
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 23:03:17 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Partnering to make full-stack fast: deploy PlanetScale databases directly from Workers]]></title>
            <link>https://blog.cloudflare.com/planetscale-postgres-workers/</link>
            <pubDate>Thu, 25 Sep 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ We’ve teamed up with PlanetScale to make shipping full-stack applications on Cloudflare Workers even easier.  ]]></description>
            <content:encoded><![CDATA[ <p>We’re not burying the lede on this one: you can now connect <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers</u></a> to your PlanetScale databases directly and ship full-stack applications backed by Postgres or MySQL. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3tcLGobPxPIHoDYEiGcY0X/d970a4a6b8a9e6ebc7d06ab57b168007/Frame_1321317798__1_.png" />
          </figure><p>We’ve teamed up with <a href="https://planetscale.com/"><u>PlanetScale</u></a> because we wanted to partner with a database provider that we could confidently recommend to our users: one that shares our obsession with performance, reliability and developer experience. These are all critical factors for any development team building a serious application. </p><p>Now, when connecting to PlanetScale databases, your connections are automatically configured for optimal performance with <a href="https://www.cloudflare.com/developer-platform/products/hyperdrive/"><u>Hyperdrive</u></a>, ensuring that you have the fastest access from your Workers to your databases, regardless of where your Workers are running.</p>
    <div>
      <h3>Building full-stack</h3>
      <a href="#building-full-stack">
        
      </a>
    </div>
    <p>As Workers has matured into a full-stack platform, we’ve introduced more options for connecting to your data. With <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a>, we made it easy to store configuration and cache unstructured data on the edge. With <a href="https://www.cloudflare.com/developer-platform/products/d1/"><u>D1</u></a> and <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a>, we made it possible to build multi-tenant apps with simple, isolated SQL databases. And with Hyperdrive, we made connecting to external databases fast and scalable from Workers.</p><p>Today, we’re introducing a new choice for building on Cloudflare: Postgres and MySQL PlanetScale databases, directly accessible from within the Cloudflare dashboard. Link your Cloudflare and PlanetScale accounts, stop manually copying API keys back-and-forth, and connect Workers to any of your PlanetScale databases (production or otherwise!).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/71rXsGZgXWem4yvkhdtHsP/55f9433b5447c09703ef39a547881497/image3.png" />
          </figure><p><sup>Connect to a PlanetScale database — no figuring things out on your own</sup></p><p>Postgres and MySQL are the most popular options for building applications, and with good reason. Many large companies (like Cloudflare!) have built and scaled on these databases, creating a robust ecosystem around them. You may want access to the power, familiarity, and functionality that these databases provide. </p><p>Importantly, all of this builds on <a href="https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access/"><u>Hyperdrive</u></a>, our distributed connection pooler and query caching infrastructure. Hyperdrive keeps connections to your databases warm to avoid incurring latency penalties for every new request, reduces the CPU load on your database by managing a connection pool, and can cache the results of your most frequent queries, removing load from your database altogether. Given that about 80% of queries for a typical transactional database are read-only, this can be substantial — we’ve observed it in practice!</p>
    <div>
      <h3>No more copying credentials around</h3>
      <a href="#no-more-copying-credentials-around">
        
      </a>
    </div>
    <p>Starting today, you can <a href="https://dash.cloudflare.com/?to=/:account/workers/hyperdrive?step=1&amp;modal=1"><u>connect to your PlanetScale databases from the Cloudflare dashboard</u></a> in just a few clicks. Connecting is now secure by default with a one-click password rotation option, without needing to copy and manage credentials back and forth. A Hyperdrive configuration will be created for your PlanetScale database, providing you with the optimal setup to start building on Workers.</p><p>And the experience spans both Cloudflare and PlanetScale dashboards: you can also create and view attached Hyperdrive configurations for your databases from the PlanetScale dashboard.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3I7WyAGXCLY8xhugPlIhl5/0ec38f0248140a628d805df7bb62dcc3/image2.png" />
          </figure><p>By automatically integrating with Hyperdrive, your PlanetScale databases are optimally configured for access from Workers. When you connect your database via Hyperdrive, Hyperdrive’s Placement system automatically determines the location of the database and places its pool of database connections in Cloudflare data centers with the lowest possible latency. </p><p>When one of your Workers connects to your Hyperdrive configuration for your PlanetScale database, Hyperdrive will ensure the fastest access to your database by eliminating the unnecessary roundtrips included in a typical database connection setup. Hyperdrive will resolve connection setup within the Hyperdrive client and use existing connections from the pool to quickly serve your queries. Better yet, Hyperdrive allows you to cache your query results in case you need to scale for high-read workloads. </p><p>This is a peek under the hood of how Hyperdrive makes access to PlanetScale as fast as possible. We’ve previously blogged about <a href="https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access/"><u>Hyperdrive’s technical underpinnings</u></a> — it’s worth a read. And with this integration, you can easily connect to your databases across different Workers applications or environments, without having to reconfigure your credentials. All in all, a perfect match.</p>
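<p>To make this concrete, here is a minimal sketch (not the official integration code) of a Worker handler that queries a PlanetScale Postgres database through a Hyperdrive binding named HYPERDRIVE. The books table is hypothetical, and the injected createClient factory stands in for a Postgres driver such as postgres.js:</p>

```javascript
// Minimal sketch: querying a PlanetScale database from a Worker via a
// Hyperdrive binding named HYPERDRIVE. `createClient` stands in for a
// Postgres driver factory (e.g. postgres.js); it is injected so the
// handler logic stays driver-agnostic. The `books` table is hypothetical.
async function handleBooksRequest(env, ctx, createClient) {
  // Hyperdrive exposes a connection string that routes through its
  // regional connection pool instead of dialing the database directly.
  const sql = createClient(env.HYPERDRIVE.connectionString);
  try {
    const rows = await sql`SELECT id, title FROM books LIMIT 10`;
    return new Response(JSON.stringify(rows), {
      headers: { "content-type": "application/json" },
    });
  } finally {
    // Release the connection back to Hyperdrive's pool after the
    // response has been sent.
    ctx.waitUntil(sql.end());
  }
}
```

<p>Because Hyperdrive handles pooling and caching behind that connection string, the Worker code looks identical to connecting to any ordinary Postgres endpoint.</p>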
    <div>
      <h3>Get started with PlanetScale and Workers</h3>
      <a href="#get-started-with-planetscale-and-workers">
        
      </a>
    </div>
    <p>With this partnership, we’re making it trivially easy to build on Workers with PlanetScale. Want to build a new application on Workers that connects to your existing PlanetScale cluster? With just a few clicks, you can create a globally deployed app that can query your database, cache your hottest queries, and keep your database connections warmed for fast access from Workers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3eTtJKz4sxeNvClVQMWIFg/9c91fb02b1cd4eca7ad5ef013e7ab0f0/image4.png" />
          </figure><p><sup><i>Connect directly to your PlanetScale MySQL or Postgres databases from the Cloudflare dashboard, for optimal configuration with Hyperdrive.</i></sup></p><p>To get started, you can:</p><ul><li><p>Head to the <a href="https://dash.cloudflare.com/?to=/:account/workers/hyperdrive?step=1&amp;modal=1"><u>Cloudflare dashboard</u></a> and connect your PlanetScale account</p></li><li><p>… or head to <a href="https://app.planetscale.com/"><u>PlanetScale</u></a> and connect your Cloudflare account</p></li><li><p>… and then deploy a Worker</p></li></ul><p>Review the <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive docs</u></a> and/or the <a href="https://planetscale.com/docs"><u>PlanetScale docs</u></a> to learn more about how to connect Workers to PlanetScale and start shipping.</p> ]]></content:encoded>
            <category><![CDATA[Hyperdrive]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Partnership]]></category>
            <category><![CDATA[Database]]></category>
            <guid isPermaLink="false">7ibt13YouHX6Ew1wLZn5pi</guid>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Adrian Gracia</dc:creator>
        </item>
        <item>
            <title><![CDATA[Your frontend, backend, and database — now in one Cloudflare Worker]]></title>
            <link>https://blog.cloudflare.com/full-stack-development-on-cloudflare-workers/</link>
            <pubDate>Tue, 08 Apr 2025 14:05:00 GMT</pubDate>
            <description><![CDATA[ You can now deploy static sites and full-stack applications on Cloudflare Workers. Framework support for React Router v7, Astro, Vue, and more is generally available today, along with the Cloudflare Vite plugin.  ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://blog.cloudflare.com/builder-day-2024-announcements/#static-asset-hosting"><u>In September 2024</u></a>, we introduced beta support for <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">hosting</a>, storing, and serving <a href="https://developers.cloudflare.com/workers/static-assets/"><u>static assets</u></a> for free on <a href="https://www.cloudflare.com/developer-platform/products/workers/">Cloudflare Workers</a> — something that was previously only possible on <a href="https://blog.cloudflare.com/cloudflare-pages/"><u>Cloudflare Pages</u></a>. Being able to host these assets — your client-side JavaScript, HTML, CSS, fonts, and images — was a critical missing piece for developers looking to build a full-stack application within a <b>single Worker</b>. </p><p>Today we’re announcing ten big improvements to building apps on Cloudflare. All together, these new additions allow you to build and host projects ranging from simple static sites to full-stack applications, all on Cloudflare Workers:</p><ul><li><p>Cloudflare Workers now provides production ready, <b>generally available</b> (GA) support for <a href="https://developers.cloudflare.com/workers/frameworks/framework-guides/remix/"><u>React Router v7 (Remix)</u></a>, <a href="https://developers.cloudflare.com/workers/frameworks/framework-guides/astro/"><u>Astro</u></a>, <a href="https://developers.cloudflare.com/workers/frameworks/framework-guides/hono/"><u>Hono</u></a>, <a href="https://developers.cloudflare.com/workers/frameworks/framework-guides/vue/"><u>Vue.js</u></a>, <a href="https://developers.cloudflare.com/workers/frameworks/framework-guides/nuxt/"><u>Nuxt</u></a>, <a href="https://developers.cloudflare.com/workers/frameworks/framework-guides/svelte/"><u>Svelte (SvelteKit)</u></a>, and <a href="https://developers.cloudflare.com/workers/frameworks/"><u>more</u></a>, with GA support for more frameworks including <a 
href="https://developers.cloudflare.com/workers/frameworks/framework-guides/nextjs/"><u>Next.js</u></a>, <a href="https://developers.cloudflare.com/workers/frameworks/framework-guides/angular/"><u>Angular</u></a>, and <a href="https://developers.cloudflare.com/workers/frameworks/framework-guides/solid/"><u>SolidJS</u></a> (SolidStart) to follow in Q2 2025. </p></li><li><p>You can build complete full-stack apps on Workers without a framework: you can “<a href="https://blog.cloudflare.com/introducing-the-cloudflare-vite-plugin/"><u>just use Vite</u></a>" and React together, and build a backend API in the same Worker. See our <a href="https://github.com/cloudflare/templates/tree/staging/vite-react-template"><u>Vite + React template</u></a> for an example.</p></li><li><p>The adapter for Next.js — <a href="https://opennext.js.org/cloudflare"><u>@opennextjs/cloudflare</u></a>, introduced in September 2024 as an early alpha, <a href="https://blog.cloudflare.com/deploying-nextjs-apps-to-cloudflare-workers-with-the-opennext-adapter"><u>is now v1.0-beta</u></a>, and will be GA in the coming weeks. Those using the OpenNext adapter will also be able to easily upgrade to the <a href="https://github.com/vercel/next.js/discussions/77740"><u>recently announced Next.js Deployments API</u></a>. </p></li><li><p>The <a href="https://blog.cloudflare.com/introducing-the-cloudflare-vite-plugin"><u>Cloudflare Vite plugin</u></a> is now v1.0 and generally available. 
The Vite plugin allows you to run Vite’s development server in the Workers runtime (<code>workerd</code>), meaning you get all the benefits of Vite, including <a href="https://vite.dev/guide/features.html#hot-module-replacement"><u>Hot Module Replacement</u></a>, while still being able to use features that are exclusive to Workers (like Durable Objects).</p></li><li><p>You can now use static <a href="https://developers.cloudflare.com/workers/static-assets/headers/"><u>_headers</u></a> and <a href="https://developers.cloudflare.com/workers/static-assets/redirects/"><u>_redirects</u></a> configuration files for your applications on Workers, something that was previously only available on Pages. These files allow you to add simple headers and configure redirects without executing any Worker code. </p></li><li><p>In addition to <a href="https://developers.cloudflare.com/hyperdrive/configuration/connect-to-postgres/"><u>PostgreSQL</u></a>, you can now connect to <a href="https://blog.cloudflare.com/building-global-mysql-apps-with-cloudflare-workers-and-hyperdrive"><u>MySQL databases from Cloudflare Workers, via Hyperdrive</u></a>. Bring your existing PlanetScale, AWS, GCP, Azure, or other MySQL database, and Hyperdrive will take care of pooling connections to your database and eliminating unnecessary roundtrips by caching queries.</p></li><li><p><a href="#node-js-compatibility"><u>More Node.js APIs are available</u></a> in the Workers Runtime — including APIs from the <code>crypto</code>, <code>tls</code>, <code>net</code>, and <code>dns</code> modules. 
We’ve also increased the maximum CPU time for a Workers request from 30 seconds to 5 minutes.</p></li><li><p>You can now <a href="https://blog.cloudflare.com/deploy-workers-applications-in-seconds"><u>bring any repository from GitHub or GitLab that contains a Worker application</u></a>, and <a href="https://developers.cloudflare.com/workers/ci-cd/builds/"><u>Workers Builds</u></a> will take care of deploying the app as a new Worker on your account. <a href="#workers-builds"><u>Workers Builds is also starting much more quickly</u></a> (by up to 6 seconds for every build). </p></li><li><p>You can now set up Workers Builds to <a href="https://developers.cloudflare.com/workers/ci-cd/builds/build-branches/#configure-non-production-branch-builds"><u>run on non-production branches</u></a>, and preview URLs will be <a href="https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/#pull-request-comment"><u>posted back to GitHub as a comment</u></a>. </p></li><li><p>The <a href="https://developers.cloudflare.com/images/transform-images/bindings/"><u>Images binding in Workers</u></a> is generally available, allowing you to build more flexible, programmatic workflows. </p></li></ul><p>These improvements allow you to build both simple static sites and more complex server-side rendered applications. Like <a href="https://www.cloudflare.com/developer-platform/products/pages/">Pages</a>, you only get charged when your Worker code runs, meaning you can host and serve static sites for free. When you want to do any rendering on the server or need to build an API, simply add a Worker to handle your backend. 
And when you need to read or write data in your app, you can connect to an existing database with <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a>, or use any of our storage solutions: <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a>, <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a>, <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>, or <a href="https://developers.cloudflare.com/d1/"><u>D1</u></a>. </p><p>If you'd like to dive straight into code, you can deploy a single-page application built with Vite and React, with the option to connect to a hosted database with Hyperdrive, by clicking this “Deploy to Cloudflare” button: </p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/react-postgres-fullstack-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56hrgUpOebvOdzbI8j6liw/bc9e7d01dde8cb6a8a4623aec3abc883/1.jpg" />
          </figure>
    <div>
      <h2>Start with Workers</h2>
      <a href="#start-with-workers">
        
      </a>
    </div>
    <p>Previously, you needed to choose between building on Cloudflare Pages or Workers (or use Pages for one part of your app, and Workers for another) just to get started. This meant figuring out what your app needed from the start, and hoping that if your project evolved, you wouldn’t be stuck with the wrong platform and architecture. Workers was designed to be a flexible platform, allowing developers to evolve projects as needed — and so, we’ve <a href="https://blog.cloudflare.com/pages-and-workers-are-converging-into-one-experience/"><u>worked to bring pieces of Pages into Workers</u></a> over the years.  </p><p>Now that Workers supports both serving static assets <b>and </b>server-side rendering, you should <b>start with Workers</b>. Cloudflare Pages will continue to be supported, but, going forward, all of our investment, optimizations, and feature work will be dedicated to improving Workers. We aim to make Workers the best platform for building full-stack apps, building upon your feedback of what went well with Pages and what we could improve. </p><p>Before, building an app on Pages meant you got a really easy, opinionated on-ramp, but you’d eventually hit a wall if your application got more complex. If you wanted to use Durable Objects to manage state, you would need to set up an entirely separate Worker to do so, ending up with a complicated deployment and more overhead. You also were limited to real-time logs, and could only roll out changes all in one go. </p><p>When you build on Workers, you can immediately bind to any other Developer Platform service (including <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>, <a href="https://developers.cloudflare.com/email-routing/email-workers/"><u>Email Workers</u></a>, and more), and manage both your front end and back end in a single project — all with a single deployment. 
You also get the whole suite of <a href="https://developers.cloudflare.com/workers/observability/"><u>Workers observability</u></a> tooling built into the platform, such as <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs/"><u>Workers Logs</u></a>. And if you want to roll out changes to only a certain percentage of traffic, you can do so with <a href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/"><u>Gradual Deployments</u></a>. </p><p>These latest improvements are part of our goal to bring the best parts of Pages into Workers. For example, we now support static <a href="https://developers.cloudflare.com/workers/static-assets/headers/"><u>_headers</u></a> and <a href="https://developers.cloudflare.com/workers/static-assets/redirects/"><u>_redirects</u></a> config files, so that you can easily take an existing project from Pages (or another platform) and move it over to Workers, without needing to change your project. We also directly integrate with GitHub and GitLab with <a href="https://developers.cloudflare.com/workers/ci-cd/builds/"><u>Workers Builds</u></a>, providing automatic builds and deployments. And starting today, <a href="https://developers.cloudflare.com/workers/configuration/previews/"><u>Preview URLs</u></a> are <a href="https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/#pull-request-comment"><u>posted back to your repository as a comment</u></a>, with feature branch aliases and environments coming soon. </p><p>To learn how to migrate an existing project from Pages to Workers, read our <a href="https://developers.cloudflare.com/workers/static-assets/migrate-from-pages/"><u>migration guide</u></a>. </p><p>Next, let’s talk about how you can build applications with different rendering modes on Workers. </p>
    <div>
      <h2>Building static sites, SPAs, and SSR on Workers</h2>
      <a href="#building-static-sites-spas-and-ssr-on-workers">
        
      </a>
    </div>
    <p>As a quick primer, here are all the architectures and rendering modes we’ll be discussing that are supported on Workers: </p><ul><li><p><b>Static sites</b>: When you visit a static site, the server immediately returns pre-built static assets — HTML, CSS, JavaScript, images, and fonts. There’s no dynamic rendering happening on the server at request-time. Static assets are typically generated at build-time and served directly from a <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/"><u>CDN</u></a>, making static sites fast and easily cacheable. This approach works well for sites with content that rarely changes. </p></li><li><p><b>Single-Page Applications (SPAs)</b>:  When you load an SPA, the server initially sends a minimal HTML shell and a JavaScript bundle (served as static assets). Your browser downloads this JavaScript, which then takes over to render the entire user interface client-side. After the initial load, all navigation occurs without full-page refreshes, typically via client-side routing. This creates a fast, app-like experience. </p></li><li><p><b>Server-Side Rendered (SSR) applications</b>: When you first visit a site that uses SSR, the server generates a fully-rendered HTML page on-demand for that request. Your browser immediately displays this complete HTML, resulting in a fast first page load. Once loaded, JavaScript "<a href="https://en.wikipedia.org/wiki/Hydration_(web_development)"><u>hydrates</u></a>" the page, adding interactivity. Subsequent navigations can either trigger new server-rendered pages or, in many modern frameworks, transition into client-side rendering similar to an SPA.</p></li></ul><p>Next, we’ll dive into how you can build these kinds of applications on Workers, starting with setting up your development environment. </p>
    <div>
      <h3>Setup: build and dev</h3>
      <a href="#setup-build-and-dev">
        
      </a>
    </div>
    <p>Before uploading your application, you need to bundle all of your client-side code into a directory of <b>static assets</b>. Wrangler bundles and builds your code when you run <code>wrangler dev</code>, but we also now support Vite with our <a href="https://www.npmjs.com/package/@cloudflare/vite-plugin"><u>new Vite plugin</u></a>. This is a great option if you already use Vite’s build tooling and development server — you can keep developing (and testing with <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/"><u>Vitest</u></a>) through Vite’s dev server while your code runs in the Workers runtime. </p><p>To get started with the Cloudflare Vite plugin, scaffold a React application using Vite and our plugin by running: </p>
            <pre><code>npm create cloudflare@latest my-react-app -- --framework=react</code></pre>
            <p>When you open the project, you should see a directory structure like this: </p>
            <pre><code>...
├── api
│   └── index.ts
├── public
│   └── ...
├── src
│   └── ...
...
├── index.html
├── package.json
├── vite.config.ts
└── wrangler.jsonc</code></pre>
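<p>The plugin wiring lives in the Vite config file shown in the listing above. As a rough sketch (assuming <code>@vitejs/plugin-react</code> and <code>@cloudflare/vite-plugin</code> are installed), it enables React alongside the Cloudflare plugin, which reads your <code>wrangler.jsonc</code> for Worker settings:</p>

```javascript
// vite.config.js — a minimal sketch, not the template's exact file.
// The cloudflare() plugin runs your Worker code in workerd during
// `vite dev` and drives the production build that emits /dist.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [react(), cloudflare()],
});
```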
            <p>If you run <code>npm run build</code>, you’ll see a new folder appear, named <code>/dist</code>. </p>
            <pre><code>...
├── api
│   └── index.ts
├── dist
│   └── ...
├── public
│   └── ...
├── src
│   └── ...
...
├── index.html
├── package.json
├── vite.config.ts
└── wrangler.jsonc</code></pre>
            <p>The Vite plugin informs Wrangler that this <code>/dist</code> directory contains the project’s built static assets — which, in this case, includes client-side code, some CSS files, and images. </p><p>Once deployed, this single-page application (SPA) architecture will look something like this: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6syBGxC8udJi8wlsqvLTXW/526d05095a953cb6d29526abfd4f4b3a/2.jpg" />
          </figure><p>When a request comes in, Cloudflare looks at the pathname and automatically serves any static assets that match that pathname. For example, if your static assets directory includes a <code>blog.html</code> file, requests for <code>example.com/blog</code> get that file. </p>
    <div>
      <h3>Static sites</h3>
      <a href="#static-sites">
        
      </a>
    </div>
    <p>If you have a static site created by a static site generator (SSG) like <a href="https://docs.astro.build/en/concepts/why-astro/"><u>Astro</u></a>, all you need to do is create a <code>wrangler.jsonc</code> file (or <code>wrangler.toml</code>) and tell Cloudflare where to find your built assets: </p>
            <pre><code>// wrangler.jsonc 

{
  "name": "my-static-site",
  "compatibility_date": "2025-04-01",
  "assets": {
    "directory": "./dist",
  }
}</code></pre>
            <p>Once you’ve added this configuration, you can simply build your project and run <code>wrangler deploy</code>. Your entire site will then be uploaded and ready for traffic on Workers. As requests start flowing in, your static site will be <a href="https://developers.cloudflare.com/workers/static-assets/#caching-behavior"><u>cached across Cloudflare’s network</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/OaFnoUfvhzfjwv537fh5S/763524c2ed3b2beb61304576723667ea/3.jpg" />
          </figure><p>You can try starting a fresh Astro project on Workers today by running:</p>
            <pre><code>npm create cloudflare@latest my-astro-app -- --framework=astro</code></pre>
            <p>You can see our other supported Frameworks and how to get started in our <a href="https://developers.cloudflare.com/workers/frameworks/"><u>framework guides</u></a>. </p>
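<p>Static sites on Workers can also use the <code>_headers</code> and <code>_redirects</code> files mentioned earlier: drop them into your build output directory alongside your assets. A sketch, with hypothetical paths:</p>

```
# _redirects: one rule per line (source, destination, optional status)
/old-blog/*  /blog/:splat  301

# _headers: a path, followed by indented header lines
/assets/*
  Cache-Control: public, max-age=31536000, immutable
```

<p>These rules are applied when serving assets, without executing any Worker code.</p>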
    <div>
      <h3>Single-page applications (SPAs) </h3>
      <a href="#single-page-applications-spas">
        
      </a>
    </div>
    <p>If you have a single-page application, you can explicitly enable <code>single-page-application</code> mode in your Wrangler configuration: </p>
            <pre><code>{
 "name": "example-spa-worker-hyperdrive",
 "main": "api/index.js",
 "compatibility_flags": ["nodejs_compat"],
 "compatibility_date": "2025-04-01",
 "assets": {
   "directory": "./dist",
   "binding": "ASSETS",
   "not_found_handling": "single-page-application"
 },
 "hyperdrive": [
   {
     "binding": "HYPERDRIVE",
     "id": "d9c9cfb2587f44ee9b0730baa692ffec",
     "localConnectionString": "postgresql://myuser:mypassword@localhost:5432/mydatabase"
   }
 ],
 "placement": {
   "mode": "smart"
 }
}</code></pre>
            <p>By enabling this, the platform assumes that any navigation request (a request that includes a <code>Sec-Fetch-Mode: navigate</code> header) is intended for static assets, and will serve up <code>index.html</code> whenever a matching static asset cannot be found. For non-navigation requests (such as requests for data) that don't match a static asset, Cloudflare will invoke the Worker script. With this setup, you can render the frontend with React, use a Worker to handle back-end operations, and use Vite to help stitch the two together. This is a great option for porting over older SPAs built with <code>create-react-app</code>, <a href="https://react.dev/blog/2025/02/14/sunsetting-create-react-app"><u>which was recently sunset</u></a>. </p><p>Another thing to note in this Wrangler configuration file: we’ve defined a Hyperdrive binding and enabled <a href="https://developers.cloudflare.com/workers/configuration/smart-placement/"><u>Smart Placement</u></a>. Hyperdrive lets us use an existing database<i> and</i> handles connection pooling. This solves a long-standing challenge of connecting Workers (which run in a highly distributed, serverless environment) directly to traditional databases. By design, Workers operate in lightweight V8 isolates with no persistent TCP sockets and strict CPU/memory limits. This isolation is great for security and speed, but it makes it difficult to hold open database connections. Hyperdrive addresses these constraints by acting as a “bridge” between Cloudflare’s network and your database, taking care of the heavy lifting of maintaining stable connection pools so that Workers can reuse them. By turning on Smart Placement, we also ensure that if requests to our Worker originate far from the database (causing latency), Cloudflare can choose to relocate both the Worker (which handles the database connection) and the Hyperdrive “bridge” closer to the database, reducing round-trip times. </p>
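<p>A rough sketch of this routing split in Worker code (assuming an assets binding named ASSETS, as in the config above, and a hypothetical <code>/api/</code> prefix for back-end routes):</p>

```javascript
// Requests under /api/ are handled by Worker code, while anything else
// is delegated to the static assets binding (assumed to be named ASSETS),
// which falls back to index.html in single-page-application mode.
async function handleRequest(request, env) {
  const url = new URL(request.url);
  if (url.pathname.startsWith("/api/")) {
    // Back-end logic lives here (e.g. queries through env.HYPERDRIVE).
    return new Response(JSON.stringify({ ok: true, path: url.pathname }), {
      headers: { "content-type": "application/json" },
    });
  }
  // Serve the matching asset, or the SPA shell for navigation requests.
  return env.ASSETS.fetch(request);
}
```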
    <div>
      <h4>SPA example: Worker code</h4>
      <a href="#spa-example-worker-code">
        
      </a>
    </div>
    <p>Let’s look at the <a href="https://github.com/korinne/example-spa-worker"><u>“Deploy to Cloudflare” example</u></a> at the top of this blog. In <code>api/index.js</code>, we’ve defined an API (using Hono) which connects to a hosted database through Hyperdrive. </p>
            <pre><code>import { Hono } from "hono";
import postgres from "postgres";
import booksRouter from "./routes/books";
import bookRelatedRouter from "./routes/book-related";

const app = new Hono();

// Setup SQL client middleware
app.use("*", async (c, next) =&gt; {
 // Create SQL client
 const sql = postgres(c.env.HYPERDRIVE.connectionString, {
   max: 5,
   fetch_types: false,
 });

 c.env.SQL = sql;

 // Process the request
 await next();

 // Close the SQL connection after the response is sent
 c.executionCtx.waitUntil(sql.end());
});

app.route("/api/books", booksRouter);
app.route("/api/books/:id/related", bookRelatedRouter);


export default {
 fetch: app.fetch,
};</code></pre>
            <p>When deployed, our app’s architecture looks something like this: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56hrgUpOebvOdzbI8j6liw/bc9e7d01dde8cb6a8a4623aec3abc883/1.jpg" />
          </figure><p>If Smart Placement moves my Worker closer to my database, the architecture could look like this: </p><div>
  
</div>
<p></p>
    <div>
      <h3>Server-Side Rendering (SSR)</h3>
      <a href="#server-side-rendering-ssr">
        
      </a>
    </div>
    <p>If you want to handle rendering on the server, we support a number of popular full-stack <a href="https://developers.cloudflare.com/workers/frameworks/"><u>frameworks</u></a>. </p><p>Here’s a version of our previous example, now using React Router v7’s server-side rendering:</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/react-router-postgres-ssr-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>You could also use Next.js with the <a href="https://opennext.js.org/cloudflare"><u>OpenNext adapter</u></a>, or any other <a href="https://developers.cloudflare.com/workers/frameworks/"><u>framework listed in our framework guides</u></a>. </p>
    <div>
      <h2>Deploy to Workers, with as few changes as possible</h2>
      <a href="#deploy-to-workers-with-as-few-changes-as-possible">
        
      </a>
    </div>
    
    <div>
      <h3>Node.js compatibility</h3>
      <a href="#node-js-compatibility">
        
      </a>
    </div>
    <p>We’ve also continued to make progress supporting Node.js APIs, recently adding support for the <code>crypto</code>, <code>tls</code>, <code>net</code>, and <code>dns</code> modules. This allows existing applications and libraries that rely on these Node.js modules to run on Workers. Let’s take a look at an example:</p><p>Previously, if you tried to use the <code>mongodb</code> package, you encountered the following error:</p>
            <pre><code>Error: [unenv] dns.resolveTxt is not implemented yet!</code></pre>
            <p>This occurred when <code>mongodb</code> used the <code>node:dns</code> module to do a DNS lookup of a hostname. Even if you avoided that issue, you would have encountered another error when <code>mongodb</code> tried to use <code>node:tls</code> to securely connect to a database.</p><p>Now, you can use <code>mongodb</code> as expected because <code>node:dns</code> and <code>node:tls</code> are supported. The same can be said for libraries relying on <code>node:crypto</code> and <code>node:net</code>.</p><p>Additionally, Workers <a href="https://developers.cloudflare.com/changelog/2025-03-11-process-env-support/"><u>now expose environment variables and secrets on the process.env object</u></a> when the <code>nodejs_compat</code> compatibility flag is on and the compatibility date is set to <code>2025-04-01</code> or beyond. Some libraries (and developers) assume that this object will be populated with variables, and rely on it for top-level configuration. Before this change, such libraries could break unexpectedly, and developers had to write additional logic to handle variables on Cloudflare Workers.</p><p>Now, you can just access your variables as you would in Node.js.</p>
            <pre><code>const LOG_LEVEL = process.env.LOG_LEVEL || "info";</code></pre>
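<p>For reference, opting into this behavior might look like the following in a Worker’s configuration file (a minimal sketch; the Worker name and entry point are placeholders):</p>

```toml
# wrangler.toml sketch: enable Node.js APIs and a compatibility date
# recent enough for process.env to be populated automatically
name = "my-worker"                # placeholder
main = "src/index.js"             # placeholder
compatibility_date = "2025-04-01"
compatibility_flags = ["nodejs_compat"]
```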
            
    <div>
      <h3>Additional Worker CPU time</h3>
      <a href="#additional-worker-cpu-time">
        
      </a>
    </div>
    <p>We have also <a href="https://developers.cloudflare.com/changelog/2025-03-25-higher-cpu-limits/"><u>raised the maximum CPU time per Worker request</u></a> from 30 seconds to 5 minutes. This allows compute-intensive operations to run for longer without timing out. Say you want to use the newly supported <code>node:crypto</code> module to hash a very large file: you can now do this on Workers without having to rely on external compute for CPU-intensive operations.</p>
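<p>As an illustration, that kind of hashing work can be sketched with the <code>node:crypto</code> API. This is a plain sketch rather than Workers-specific code; the chunk array stands in for a streamed file body:</p>

```javascript
// Sketch: incrementally hashing a large payload with node:crypto.
// In a Worker, this module is available with the nodejs_compat flag.
import { createHash } from "node:crypto";

function hashChunks(chunks) {
  const hash = createHash("sha256");
  for (const chunk of chunks) {
    hash.update(chunk); // feed data incrementally; no need to buffer it all
  }
  return hash.digest("hex");
}

console.log(hashChunks(["hel", "lo"]));
```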
    <div>
      <h3>Workers Builds </h3>
      <a href="#workers-builds">
        
      </a>
    </div>
    <p>We’ve also made improvements to <a href="https://developers.cloudflare.com/workers/ci-cd/builds/"><u>Workers Builds</u></a>, which allows you to connect a Git repository to your Worker, so that you can have automatic builds and deployments on every pushed change. Workers Builds was introduced during <a href="https://blog.cloudflare.com/builder-day-2024-announcements/#continuous-integration-and-delivery"><u>Builder Day 2024</u></a>, and initially only allowed you to connect a repository to an existing Worker. Now, you can bring a repository and <a href="https://blog.cloudflare.com/deploy-workers-applications-in-seconds/"><u>immediately deploy it as a new Worker</u></a>, reducing the amount of setup and button clicking needed to bring a project over. We’ve improved the performance of Workers Builds by reducing the latency of build starts by <b>6 seconds</b> — they now start within <b>10 seconds</b> on average. We also boosted API responsiveness, achieving a <b>7x</b> latency improvement thanks to Smart Placement. </p><ul><li><p><b>Note</b>: On April 2, 2025, Workers Builds transitioned to a new pricing model, as announced during <a href="https://blog.cloudflare.com/builder-day-2024-announcements/"><u>Builder Day 2024</u></a>. Free plan users are now capped at 3,000 minutes of build time, and Workers Paid subscription users move to a usage-based model with 6,000 free minutes included and $0.005 per additional build minute after that. To better support concurrent builds, Paid plans also now get six concurrent builds, making it easier to work across multiple projects and monorepos. 
For more information on pricing, see the <a href="https://developers.cloudflare.com/workers/ci-cd/builds/limits-and-pricing/"><u>documentation</u></a>.</p></li></ul><p>You can also set up Workers Builds to <a href="https://developers.cloudflare.com/workers/ci-cd/builds/build-branches/#configure-non-production-branch-builds"><u>run on non-production branches</u></a>, and preview URLs will be <a href="https://developers.cloudflare.com/workers/ci-cd/builds/git-integration/github-integration/#pull-request-comment"><u>posted back to GitHub as a comment</u></a>. </p>
    <div>
      <h3>Bind the Images API to your Worker</h3>
      <a href="#bind-the-images-api-to-your-worker">
        
      </a>
    </div>
    <p>Last week, we wrote a <a href="https://blog.cloudflare.com/improve-your-media-pipelines-with-the-images-binding-for-cloudflare-workers/"><u>blog post</u></a> that covers how the Images binding enables more flexible, programmatic workflows for image optimization.</p><p>Previously, you could access image optimization features by calling <code>fetch()</code> in your Worker. This method requires the original image to be retrievable by URL. However, you may have cases where images aren’t accessible from a URL, like when you want to compress user-uploaded images before they are uploaded to your storage. With the Images binding, you can directly optimize an image by operating on its body as a stream of bytes.</p><p>To learn more, read our guide on <a href="https://developers.cloudflare.com/images/tutorials/optimize-user-uploaded-image"><u>transforming an image before it gets uploaded to R2</u></a>.</p>
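<p>As a sketch of what that looks like in practice (the binding names <code>IMAGES</code> and <code>BUCKET</code> and the transform options here are illustrative assumptions, not the guide’s exact code):</p>

```javascript
// Sketch (hedged): compressing a user upload before storing it in R2,
// operating directly on the body as a stream of bytes -- the original
// image never needs to be reachable by URL. Assumes an Images binding
// named IMAGES and an R2 bucket binding named BUCKET are configured.
async function optimizeAndStore(env, bodyStream, key) {
  const result = await env.IMAGES.input(bodyStream)
    .transform({ width: 1024 })
    .output({ format: "image/webp" });
  await env.BUCKET.put(key, result.image());
  return key;
}
```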
    <div>
      <h2>Start building today</h2>
      <a href="#start-building-today">
        
      </a>
    </div>
    <p>We’re excited to see what you’ll build, and are focused on new features and improvements to make it easier to create any application on Workers. Much of this work was made even better by community feedback, and we encourage everyone to <a href="https://discord.com/invite/cloudflaredev"><u>join our Discord</u></a> to participate in the discussion. </p><p><b>Helpful resources to get you started:</b></p><ul><li><p><a href="https://developers.cloudflare.com/workers/frameworks/"><u>Framework guides</u></a> </p></li><li><p><a href="https://developers.cloudflare.com/workers/static-assets/migrate-from-pages/"><u>Migration guide</u></a> </p></li><li><p><a href="https://developers.cloudflare.com/workers/static-assets/"><u>Static assets documentation</u></a> </p></li><li><p><a href="https://developers.cloudflare.com/workers/vite-plugin"><u>Cloudflare Vite plugin documentation</u></a></p></li></ul><p></p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Front End]]></category>
            <category><![CDATA[Full Stack]]></category>
            <category><![CDATA[General Availability]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[MySQL]]></category>
            <category><![CDATA[Hyperdrive]]></category>
            <guid isPermaLink="false">67CgcpMED2Rw0BozjKbdUz</guid>
            <dc:creator>Korinne Alpers</dc:creator>
        </item>
        <item>
            <title><![CDATA[Pools across the sea: how Hyperdrive speeds up access to databases and why we’re making it free]]></title>
            <link>https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access/</link>
            <pubDate>Tue, 08 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Hyperdrive, Cloudflare's global connection pooler, relies on some key innovations to make your database connections work. Let's dive deeper, in celebration of its availability for Free Plan customers. ]]></description>
            <content:encoded><![CDATA[ <p></p><p></p>
    <div>
      <h2>Free as in beer</h2>
      <a href="#free-as-in-beer">
        
      </a>
    </div>
    <p>In acknowledgement of its pivotal role in building distributed applications that rely on regional databases, we’re making Hyperdrive available on the free plan of Cloudflare Workers!</p><p><a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a> enables you to build performant, global apps on Workers with <a href="https://developers.cloudflare.com/hyperdrive/examples/"><u>your existing SQL databases</u></a>. Tell it your database connection string, bring your existing drivers, and Hyperdrive will make connecting to your database faster. No major <a href="https://www.cloudflare.com/learning/cloud/how-to-refactor-applications/">refactors</a> or convoluted configuration required.</p><p>Over the past year, Hyperdrive has become a key service for teams that want to build their applications on Workers and connect to SQL databases. This includes our own engineering teams, with Hyperdrive serving as the tool of choice to connect from Workers to our own Postgres clusters for many of the control-plane actions of our billing, <a href="https://www.cloudflare.com/developer-platform/products/d1/"><u>D1</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a>, and <a href="https://www.cloudflare.com/developer-platform/products/workers-kv/"><u>Workers KV</u></a> teams (just to name a few). </p><p>This has highlighted for us that Hyperdrive is a fundamental building block, and it solves a common class of problems for which there isn’t a great alternative. We want to make it possible for everyone building on Workers to connect to their database of choice with the best performance possible, using the drivers and frameworks they already know and love.</p>
    <div>
      <h3>Performance is a feature</h3>
      <a href="#performance-is-a-feature">
        
      </a>
    </div>
    <p>To illustrate how much Hyperdrive can improve your application’s performance, let’s write the world’s simplest benchmark. This is obviously not production code, but is meant to be reflective of a common application you’d bring to the Workers platform. We’re going to use a simple table, a very popular OSS driver (<a href="https://github.com/porsager/postgres"><u>postgres.js</u></a>), and run a standard OLTP workload from a Worker. We’re going to keep our origin database in London, and query it from Chicago (those locations will come back up later, so keep them in mind).</p>
            <pre><code>// This is the test table we're using
// CREATE TABLE IF NOT EXISTS test_data(userId bigint, userText text, isActive bool);

import postgres from 'postgres';

// `env` and `ctx` come from the surrounding Worker fetch handler
let direct_conn = '&lt;direct connection string here!&gt;';
let hyperdrive_conn = env.HYPERDRIVE.connectionString;

async function measureLatency(connString: string) {
	let beginTime = Date.now();
	let sql = postgres(connString);

	await sql`INSERT INTO test_data VALUES (${999}, 'lorem_ipsum', ${true})`;
	await sql`SELECT userId, userText, isActive FROM test_data WHERE userId = ${999}`;

	let latency = Date.now() - beginTime;
	// Close the connection without blocking the response
	ctx.waitUntil(sql.end());
	return latency;
}

let directLatency = await measureLatency(direct_conn);
let hyperdriveLatency = await measureLatency(hyperdrive_conn);</code></pre>
            <p>The code above</p><ol><li><p>Takes a standard database connection string, and uses it to create a database connection.</p></li><li><p>Loads a user record into the database.</p></li><li><p>Queries all records for that user.</p></li><li><p>Measures how long this takes to do with a direct connection, and with Hyperdrive.</p></li></ol><p>When connecting directly to the origin database, this set of queries takes an average of 1200 ms. With absolutely no other changes, just swapping out the connection string for <code>env.HYPERDRIVE.connectionString</code>, this number is cut down to 500 ms (an almost 60% reduction). If you enable Hyperdrive’s caching, so that the SELECT query is served from cache, this takes only 320 ms. With this one-line change, Hyperdrive will reduce the latency of this Worker by almost 75%! In addition to this speedup, you also get secure auth and transport, as well as a connection pool to help protect your database from being overwhelmed when your usage scales up. See it for yourself using our <a href="https://hyperdrive-demo.pages.dev/"><u>demo application</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6TTbEV9d7NClGk0iRmkEG3/4d5e4fdeb195337a92942bc7e13dbb6f/image7.png" />
          </figure><p><sup><i>A demo application comparing latencies between Hyperdrive and direct-to-database connections.</i></sup></p><p>Traditional SQL databases are familiar and powerful, but they are designed to be colocated with long-running compute. They were not conceived in the era of modern serverless applications, and have connection models that don't take the constraints of such an environment into account. Instead, they require highly stateful connections that do not play well with Workers’ global and stateless model. Hyperdrive solves this problem by maintaining database connections across Cloudflare’s network ready to be used at a moment’s notice, caching your queries for fast access, and eliminating round trips to minimize network latency.</p><p>With this announcement, many developers are going to be taking a look at Hyperdrive for the first time over the coming weeks and months. To help people dive in and try it out, we think it’s time to talk about how Hyperdrive actually works.</p>
    <div>
      <h2>Staying warm in the pool</h2>
      <a href="#staying-warm-in-the-pool">
        
      </a>
    </div>
    <p>Let’s talk a bit about database connection poolers, how they work, and what problems they already solve. They are <a href="https://github.com/pgbouncer/pgbouncer/commit/a0d2b294e0270f8a246e5b98f0700716c0672b0d"><u>hardly a new technology</u></a>, after all. </p><p>The point of any connection pooler, Hyperdrive or others, is to minimize the overhead of establishing and coordinating database connections. Every new database connection requires additional <a href="https://blog.anarazel.de/2020/10/07/measuring-the-memory-overhead-of-a-postgres-connection/"><u>memory</u></a> and CPU time from the database server, and this can only scale just so well as the number of concurrent connections climbs. So the question becomes, how should database connections be shared across clients? </p><p>There are three <a href="https://www.pgbouncer.org/features.html"><u>commonly-used approaches</u></a> for doing so. These are:</p><ul><li><p><b>Session mode:</b> whenever a client connects, it is assigned a connection of its own until it disconnects. This dramatically reduces the available concurrency, in exchange for much simpler implementation and a broader selection of supported features.</p></li><li><p><b>Transaction mode:</b> when a client is ready to send a query or open a transaction, it is assigned a connection on which to do so. This connection will be returned to the pool when the query or transaction concludes. Subsequent queries during the same client session may (or may not) be assigned a different connection.</p></li><li><p><b>Statement mode:</b> like transaction mode, but a connection is given out and returned for each statement. Multi-statement transactions are disallowed.</p></li></ul><p>When building Hyperdrive, we had to decide which of these modes we wanted to use. Each of the approaches implies some <a href="https://jpcamara.com/2023/04/12/pgbouncer-is-useful.html"><u>fairly serious tradeoffs</u></a>, so what’s the right choice?</p><p>For a service intended to make using a database from Workers as pleasant as possible, we went with the choice that balances features and performance, and designed Hyperdrive as a transaction-mode pooler. This best serves the goals of supporting a large number of short-lived clients (and therefore very high concurrency), while still supporting the transactional semantics that cause so many people to reach for an RDBMS in the first place.</p><p>In terms of this part of its design, Hyperdrive takes its cues from many pre-existing popular connection poolers, and manages operations to allow our users to focus on designing their full-stack applications. There is a configured limit to the number of connections the pool will give out, limits to how long a connection will be held idle until it is allowed to drop and return resources to the database, bookkeeping around <a href="https://blog.cloudflare.com/elephants-in-tunnels-how-hyperdrive-connects-to-databases-inside-your-vpc-networks/"><u>prepared statements</u></a> being shared across pooled connections, and other traditional concerns of the management of these resources to help ensure the origin database is able to run smoothly. These are all described in <a href="https://developers.cloudflare.com/hyperdrive/platform/limits/"><u>our documentation</u></a>.</p>
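<p>The transaction-mode contract described above can be pictured with a toy model (a sketch of the idea, not Hyperdrive’s implementation): a connection is checked out only for the duration of one query or transaction, then immediately returned for the next client.</p>

```javascript
// Toy sketch of a transaction-mode pool: connections are handed out per
// query/transaction and returned right afterward, so a small fixed set
// of connections can serve many short-lived clients.
class TransactionModePool {
  constructor(connections) {
    this.idle = [...connections]; // connections ready to be checked out
    this.waiters = [];            // clients waiting for a free connection
  }

  async withConnection(fn) {
    const conn = await this.checkout();
    try {
      return await fn(conn); // run one query or transaction
    } finally {
      this.checkin(conn);    // return to the pool immediately
    }
  }

  checkout() {
    if (this.idle.length) return Promise.resolve(this.idle.pop());
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  checkin(conn) {
    const waiter = this.waiters.shift();
    if (waiter) waiter(conn);
    else this.idle.push(conn);
  }
}
```

<p>Note how, exactly as described above, two back-to-back queries from the same client may be served by whichever underlying connection happens to be free.</p>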
    <div>
      <h2>Round and round we go</h2>
      <a href="#round-and-round-we-go">
        
      </a>
    </div>
    <p>Ok, so why build Hyperdrive then? Other poolers that solve these problems already exist — couldn’t developers using Workers just run one of those and call it a day? It turns out that connecting to regional poolers from Workers has the same major downside as connecting to regional databases: network latency and round trips.</p><p>Establishing a connection, whether to a database or a pool, requires many exchanges between the client and server. While this is true for all fully-fledged client-server databases (e.g. <a href="https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_connection_phase.html"><u>MySQL</u></a>, <a href="https://github.com/mongodb/specifications/blob/master/source/auth/auth.md"><u>MongoDB</u></a>), we are going to focus on the <a href="https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-START-UP"><u>PostgreSQL</u></a> connection protocol flow in this post. As we work through all of the steps involved, what we most want to keep track of is how many round trips it takes to accomplish. Note that we’re mostly concerned about having to wait around while these happen, so “half” round trips such as in the first diagram are not counted. This is because we can send off the message and then proceed without waiting.</p><p>The first step to establishing a connection between Postgres client and server is very familiar ground to anyone who’s worked much with networks: <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/"><u>a TCP startup handshake</u></a>. Postgres uses TCP for its underlying transport, and so we must have that connection before anything else can happen on top of it.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/58qrxBsbOXbFCFBzkZFIff/19caf62d24cdbf9c4ad69bfd8286e022/image5.png" />
          </figure><p>With our transport layer in place, the next step is to <a href="https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-SSL"><u>encrypt</u></a> the connection. The <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/"><u>TLS Handshake</u></a> involves some back-and-forth in its own right, though this has been reduced to just one round trip for TLS 1.3. Below is the simplest and fastest version of this exchange, but there are certainly scenarios where it can be much more complex.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5q7fVVQkB9Q43X3eaE76GP/b69c0ce964df370bd0609242f8e3de0c/image4.png" />
          </figure><p>After the underlying transport is established and secured, the application-level traffic can actually start! However, we’re not quite ready for queries: the client still needs to authenticate to a specific user and database. Again, there are multiple supported approaches that offer varying levels of speed and security. To make this comparison as fair as possible, we’re again going to consider the version that offers the fastest startup (password-based authentication).</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7KU6NHgZAW95nQyobo9Zwn/5f61d6e9ab6233186c865a9093a7f352/image8.png" />
          </figure><p>So, for those keeping score, establishing a new connection to your database takes a bare minimum of 5 round trips, and can very quickly climb from there. </p><p>While the latency of any given network round trip is going to vary based on so many factors that “it depends” is the only meaningful measurement available, some quick benchmarking during the writing of this post shows ~125 ms from Chicago to London. Now multiply that number by 5 round trips and the problem becomes evident: 625 ms to start up a connection is not viable in a distributed serverless environment. So how does Hyperdrive solve it? What if I told you the trick is that we do it all twice? To understand Hyperdrive’s secret sauce, we need to dive into Hyperdrive’s architecture.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/291ua8XgVnowWDOfEm05eR/a2674a9a393fcaaef8e2cfe64dd57402/image1.png" />
          </figure>
    <div>
      <h2>Impersonating a database server</h2>
      <a href="#impersonating-a-database-server">
        
      </a>
    </div>
    <p>The rest of this post is a deep dive into answering the question of how Hyperdrive does what it does. To give the clearest picture, we’re going to talk about some internal subsystems by name. To help keep everything straight, let’s start with a short glossary that you can refer back to if needed. These descriptions may not make sense yet, but they will by the end of the article.
</p><table><tr><td><p><b>Hyperdrive subsystem name</b></p></td><td><p><b>Brief description</b></p></td></tr><tr><td><p>Client</p></td><td><p>Lives on the same server as your Worker, talks directly to your database driver. This caches query results and sends queries to Endpoint if needed.</p></td></tr><tr><td><p>Endpoint</p></td><td><p>Lives in the data center nearest to your origin database, talks to your origin database. This caches query results and houses a pool of connections to your origin database.</p></td></tr><tr><td><p>Edge Validator</p></td><td><p>Sends a request to a Cloudflare data center to validate that Hyperdrive can connect to your origin database at time of creation.</p></td></tr><tr><td><p>Placement</p></td><td><p>Builds on top of Edge Validator to connect to your origin database from all eligible data centers, to identify which have the fastest connections.</p></td></tr></table><p>The first subsystem we want to dig into is named <code>Client</code>. <code>Client</code>’s first job is to pretend to be a database server. When a user’s Worker wants to connect to their database via Hyperdrive, they use a special connection string that the Worker runtime generates on the fly. This tells the Worker to reach out to a Hyperdrive process running on the same Cloudflare server, and direct all traffic to and from the database client to it.</p>
            <pre><code>import postgres from "postgres";

// Connect to Hyperdrive
const sql = postgres(env.HYPERDRIVE.connectionString);

// sql will now talk over an RPC channel to Hyperdrive, instead of via TCP to Postgres</code></pre>
            <p>Once this connection is established, the database driver will perform the usual handshake expected of it, with our <code>Client</code> playing the role of a database server and sending the appropriate responses. All of this happens on the same Cloudflare server running the Worker, and we observe that the p90 for all this is 4 ms (p50 is 2 ms). Quite a bit better than 625 ms, but how does that help? The query still needs to get to the database, right?</p><p><code>Client</code>’s second main job is to inspect the queries sent from a Worker, and decide whether they can be served from Cloudflare’s cache. We’ll talk more about that later on. Assuming that there are no cached query results available, <code>Client</code> will need to reach out to our second important subsystem, which we call <code>Endpoint</code>.</p>
    <div>
      <h2>In for the long haul</h2>
      <a href="#in-for-the-long-haul">
        
      </a>
    </div>
    <p>Before we dig into the role <code>Endpoint</code> plays, it’s worth talking more about how the <code>Client→Endpoint</code> connection works, because it’s a key piece of our solution. We have already talked a lot about the price of network round trips, and how a Worker might be quite far away from the origin database, so how does Hyperdrive handle the long trip from the <code>Client</code> running alongside their Worker to the <code>Endpoint</code> running near their database without expensive round trips?</p><p>This is accomplished with a very handy bit of Cloudflare’s networking infrastructure. When <code>Client</code> gets a cache miss, it will submit a request to our networking platform for a connection to whichever data center <code>Endpoint</code> is running on. This platform keeps a pool of ready TCP connections between all of Cloudflare’s data centers, such that we don’t need to do any preliminary handshakes to begin sending application-level traffic. You might say we put a connection pooler in our connection pooler.</p><p>Over this TCP connection, we send an initialization message that includes all of the buffered query messages the Worker has sent to <code>Client</code> (the mental model would be something like a <code>SYN</code> and a payload all bundled together). <code>Endpoint</code> will do its job processing this query, and respond by streaming the response back to <code>Client</code>, leaving the streaming channel open for any followup queries until <code>Client</code> disconnects. This approach allows us to send queries around the world with zero wasted round trips.</p>
    <div>
      <h2>Impersonating a database client</h2>
      <a href="#impersonating-a-database-client">
        
      </a>
    </div>
    <p><code>Endpoint</code> has a couple different jobs it has to do. Its first job is to pretend to be a database client, and to do the client half of the handshake shown above. Second, it must also do the same query processing that <code>Client</code> does with query messages. Finally, <code>Endpoint</code> will make the same determination on when it needs to reach out to the origin database to get uncached query results.</p><p>When <code>Endpoint</code> needs to query the origin database, it will attempt to take a connection out of a limited-size pool of database connections that it keeps. If there is an unused connection available, it is handed out from the pool and used to ferry the query to the origin database, and the results back to <code>Endpoint</code>. Once <code>Endpoint</code> has these results, the connection is immediately returned to the pool so that another <code>Client</code> can use it. These warm connections are usable in a matter of microseconds, which is obviously a dramatic improvement over the round trips from one region to another that a cold startup handshake would require.</p><p>If there are no currently unused connections sitting in the pool, it may start up a new one (assuming the pool has not already given out as many connections as it is allowed to). This set of handshakes looks exactly the same as the one <code>Client</code> does, but it happens across the network between a Cloudflare data center and wherever the origin database happens to be. These are the same 5 round trips as our original example, but instead of a full Chicago→London path on every single trip, perhaps it’s Virginia→London, or even London→London. Latency here will depend on which data center <code>Endpoint</code> is being housed in.</p>
    <div>
      <h2>Distributed choreography</h2>
      <a href="#distributed-choreography">
        
      </a>
    </div>
    <p>Earlier, we mentioned that Hyperdrive is a transaction-mode pooler. This means that when a driver is ready to send a query or open a transaction it must get a connection from the pool to use. The core challenge for a transaction-mode pooler is in aligning the state of the driver with the state of the connection checked out from the pool. For example, if the driver thinks it’s in a transaction, but the database doesn’t, then you might get errors or even corrupted results.</p><p>Hyperdrive achieves this by ensuring all connections are in the same state when they’re checked out of the pool: idle and ready for a query. Where Hyperdrive differs from other transaction-mode poolers is that it does this dance of matching up the states of two different connections across machines, such that there’s no need to share state between <code>Client</code> and <code>Endpoint</code>! Hyperdrive can terminate the incoming connection in <code>Client</code> on the same machine running the Worker, and pool the connections to the origin database wherever makes the most sense.</p><p>The job of a transaction-mode pooler is a hard one. Database connections are fundamentally stateful and keeping track of that state is important to maintain our guise when impersonating either a database client or a server. As an example, one of the trickier pieces of state to manage are <a href="https://www.postgresql.org/docs/current/protocol-overview.html#PROTOCOL-QUERY-CONCEPTS"><u>prepared statements</u></a>. When a user creates a new prepared statement, the prepared statement is only created on whichever database connection happened to be checked out at that time. Once the user finishes the transaction or query they are processing, the connection holding that statement is returned to the pool. From the user’s perspective they’re still connected using the same database connection, so a new query or transaction can reasonably expect to use that previously prepared statement. 
If a different connection is handed out for the next query and the query wants to make use of this resource, the pooler has to do something about it. We went into some depth on this topic in a <a href="https://blog.cloudflare.com/postgres-named-prepared-statements-supported-hyperdrive/"><u>previous blog post</u></a> when we released this feature, but in sum, the process looks like this:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ibtnO4URpLJ6m3Nyd2kpW/331059a6fd18c7d70b95a15af8f57cd6/image2.png" />
          </figure><p>Hyperdrive implements this by keeping track of what statements have been prepared by a given client, as well as what statements have been prepared on each origin connection in the pool. When a query comes in expecting to re-use a particular prepared statement (#8 above), Hyperdrive checks if it’s been prepared on the checked-out origin connection. If it hasn’t, Hyperdrive will replay the wire-protocol message sequence to prepare it on the newly-checked-out origin connection (#10 above) before sending the query over it. Many little corrections like this are necessary to keep the client’s connection to Hyperdrive and Hyperdrive’s connection to the origin database lined up so that both sides see what they expect.</p>
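<p>That bookkeeping can be pictured with a small sketch (a toy model of the idea, not Hyperdrive’s actual code): track which statements each origin connection has already prepared, and replay the preparation on a miss before forwarding the query.</p>

```javascript
// Toy sketch: replaying prepared statements onto whichever origin
// connection gets checked out. Each connection remembers what it has
// prepared; on a miss, the PREPARE is replayed before the EXECUTE.
function makeConnection(name) {
  return { name, prepared: new Set(), log: [] };
}

function executePrepared(conn, stmtName, stmtSql, params) {
  if (!conn.prepared.has(stmtName)) {
    // The client believes this statement already exists, so replay
    // the preparation on this connection before running the query.
    conn.log.push(`PREPARE ${stmtName} AS ${stmtSql}`);
    conn.prepared.add(stmtName);
  }
  conn.log.push(`EXECUTE ${stmtName}(${params.join(", ")})`);
}

// Two executions against the same connection: only the first
// triggers a replayed PREPARE.
const conn = makeConnection("origin-1");
executePrepared(conn, "get_user", "SELECT 1", ["42"]);
executePrepared(conn, "get_user", "SELECT 1", ["43"]);
```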
    <div>
      <h2>Better, faster, smarter, closer</h2>
      <a href="#better-faster-smarter-closer">
        
      </a>
    </div>
    <p>This “split connection” approach is the founding innovation of Hyperdrive, and one of its most vital aspects is how it affects starting up new connections. While the same 5+ round trips must always happen on startup, the actual time spent on them can be dramatically reduced by conducting them over the smallest possible distances. The impact of distance is so large that there is still a huge latency reduction even though the startup round trips must now happen <i>twice</i> (once between the Worker and <code>Client</code>, and once between <code>Endpoint</code> and your origin database). So how do we decide where to run everything, to lean into that advantage as much as possible?</p><p>The placement of <code>Client</code> has not really changed since the original design of Hyperdrive. Sharing a server with the Worker sending the queries means that the Worker runtime can connect directly to Hyperdrive with no network hop needed. While there is always room for micro-optimizations, it’s hard to do much better than that from an architecture perspective. By far the bigger piece of the latency puzzle is where to run <code>Endpoint</code>.</p><p>Hyperdrive keeps a list of data centers that are eligible to house <code>Endpoint</code>s, requiring that they have sufficient capacity and the best routes available for pooled connections to use. The key challenge to overcome here is that a <a href="https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING-URIS"><u>database connection string</u></a> does not tell you where in the world a database actually is. The reality is that reliably going from a hostname to a precise (enough) geographic location is a hard problem, even leaving aside the additional complexity of doing so <a href="https://blog.cloudflare.com/elephants-in-tunnels-how-hyperdrive-connects-to-databases-inside-your-vpc-networks/"><u>within a private network</u></a>. 
So how do we pick from that list of eligible data centers?</p><p>For much of the time since its launch, Hyperdrive solved this with a regional pool approach. When a Worker connected to Hyperdrive, the location of the Worker was used to infer what region the end user was connecting from (e.g. ENAM, WEUR, APAC, etc. — see a rough breakdown <a href="https://www.cloudflare.com/network/"><u>here</u></a>). Data centers to house <code>Endpoint</code>s for any given Hyperdrive were deterministically selected from that region’s list of eligible options using <a href="https://en.wikipedia.org/wiki/Rendezvous_hashing"><u>rendezvous hashing</u></a>, resulting in one pool of connections <i>per region</i>.</p><p>This approach worked well enough, but it had some severe shortcomings. The first and most obvious is that there’s no guarantee that the data center selected for a given region is actually closer to the origin database than the user making the request. This means that, while you’re getting the benefit of the excellent routing available on <a href="https://www.cloudflare.com/network/"><u>Cloudflare's network</u></a>, you may be going significantly out of your way to do so. The second downside is that, in the scenario where a new connection must be created, the round trips to do so may be happening over a significantly larger distance than is necessary if the origin database is in a different region than the <code>Endpoint</code> housing the regional connection pool. This increases latency and reduces throughput for the query that needs to instantiate the connection.</p><p>The final key downside here is an unfortunate interaction with <a href="https://developers.cloudflare.com/workers/configuration/smart-placement/"><u>Smart Placement</u></a>, a feature of Cloudflare Workers that analyzes the duration of your Worker requests to identify the data center to run your Worker in. 
With regional <code>Endpoint</code>s, the best Smart Placement can possibly do is to put your requests close to the <code>Endpoint</code> for whichever region the origin database is in. Again, there may be other data centers that are closer, but Smart Placement has no way to do better than where the <code>Endpoint</code> is because all Hyperdrive queries must route through it.</p>
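The rendezvous hashing selection mentioned above can be sketched briefly. This is a hypothetical illustration (the hash choice and data center names are invented, not Hyperdrive's internals), but it shows the key property: every machine computes the same winner independently, with no shared state or coordination.

```typescript
// A minimal sketch of rendezvous (highest-random-weight) hashing: every
// eligible data center gets a deterministic score for a given Hyperdrive ID,
// and the highest score wins.
import { createHash } from "node:crypto";

function score(hyperdriveId: string, dataCenter: string): bigint {
  const digest = createHash("sha256")
    .update(`${hyperdriveId}:${dataCenter}`)
    .digest("hex");
  // use the top 64 bits of the digest as the weight
  return BigInt("0x" + digest.slice(0, 16));
}

function pickEndpoint(hyperdriveId: string, eligible: string[]): string {
  return eligible.reduce((best, dc) =>
    score(hyperdriveId, dc) > score(hyperdriveId, best) ? dc : best
  );
}
```

A nice side effect of this scheme is stability: removing a non-winning data center from the eligible list never changes the winner, so pools don't churn when unrelated capacity comes and goes.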
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3y9r3Dwn6APp5Pw6kqkg0Z/3a9e202670b6c65a22294fe777064add/image6.png" />
          </figure><p>We recently <a href="https://developers.cloudflare.com/changelog/2025-03-04-hyperdrive-pooling-near-database-and-ip-range-egress/"><u>shipped some improvements</u></a> to this system that significantly enhanced performance. The new system discards the concept of regional pools entirely, in favor of a single global <code>Endpoint</code> for each Hyperdrive, housed in the eligible data center closest to the origin database.</p><p>The way we located the origin database to make this possible turned out to be very straightforward. We already had a subsystem to confirm, at the time of creation, that Hyperdrive could connect to an origin database using the provided information. We call this subsystem our <code>Edge Validator</code>.</p><p>It’s bad user experience to allow someone to create a Hyperdrive, only to find out when they go to use it that they mistyped their password. Now they’re stuck trying to debug with extra layers in the way, with a Hyperdrive that can’t possibly work. Instead, whenever a Hyperdrive is created, the <code>Edge Validator</code> will send a request to an arbitrary data center to use its instance of Hyperdrive to connect to the origin database. If this connection fails, the creation of the Hyperdrive will also fail, giving immediate feedback to the user at the time it is most helpful.</p><p>With our new subsystem, affectionately called <code>Placement</code>, we now have a solution to the geolocation problem. After <code>Edge Validator</code> has confirmed that the provided information works and the Hyperdrive is created, an extra step runs in the background. <code>Placement</code> will perform the exact same connection routine, except instead of being done once from an arbitrary data center, it is run a handful of times from every single data center that is eligible to house <code>Endpoint</code>s. 
The latency of establishing these connections is collected, and the average is sent back to a central instance of <code>Placement</code>. The data centers that can connect to the origin database the fastest are, by definition, where we want to run <code>Endpoint</code> for this Hyperdrive. The list of these is saved, and at runtime is used to select the <code>Endpoint</code> best suited to housing the pool of connections to the origin database.</p><p>Given that the secret sauce of Hyperdrive is in managing and minimizing the latency of establishing these connections, moving <code>Endpoint</code>s right next to their origin databases proved to be pretty impactful.</p>
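The ranking step described above boils down to simple arithmetic. As a rough sketch (with invented type and function names; the real Placement pipeline is more involved), averaging each data center's probe latencies and sorting gives the ordered candidate list:

```typescript
// A hypothetical sketch of the Placement ranking step: each eligible data
// center reports connection-setup latencies from a handful of trial
// connections, and a central process ranks data centers fastest-first.
type Probe = { dataCenter: string; latenciesMs: number[] };

function rankDataCenters(probes: Probe[]): string[] {
  return probes
    .map((p) => ({
      dataCenter: p.dataCenter,
      avg: p.latenciesMs.reduce((a, b) => a + b, 0) / p.latenciesMs.length,
    }))
    .sort((a, b) => a.avg - b.avg)
    .map((p) => p.dataCenter);
}
```

Keeping the full ranked list, rather than only the single fastest data center, means there is always a fallback <code>Endpoint</code> location if the best one is unavailable.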
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1MZpaxXjj4tlOAinZkDqOF/e1a555a47141ac11aa391d27806cbfbc/image9.png" />
          </figure><p><sup><i>Pictured: query latency as measured from Endpoint to origin databases. The backfill of Placement to existing customers was done in stages on 02/22 and 02/25.</i></sup></p>
    <div>
      <h2>Serverless drivers exist, though?</h2>
      <a href="#serverless-drivers-exist-though">
        
      </a>
    </div>
    <p>While we went in a different direction, it’s worth acknowledging that other teams have <a href="https://neon.tech/blog/quicker-serverless-postgres"><u>solved this same problem</u></a> with a very different approach. Custom database drivers, usually called “serverless drivers”, have made several optimization efforts to reduce both the number of round trips and how quickly they can be conducted, while still connecting directly from your client to your database in the traditional way. While these drivers are impressive, we chose not to go this route for a couple of reasons.</p><p>First off, a big part of the appeal of using Postgres is its <a href="https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/the-ever-growing-ecosystem-of-postgres-with-alvaro-hernandez/"><u>vibrant ecosystem</u></a>. Odds are good you’ve used Postgres before, and it can probably help solve whichever problem you’re tackling with your newest project. This familiarity and shared knowledge across projects is an absolute superpower. We wanted to lean into this advantage by supporting the most popular drivers already in this ecosystem, instead of fragmenting it by adding a competing one.</p><p>Second, Hyperdrive also functions as a cache for individual queries (a bit of trivia: its name while still in Alpha was actually <code>sql-query-cache</code>). Doing this as effectively as possible for distributed users requires some clever positioning of where exactly the query results should be cached. One of the unique advantages of running a distributed service on Cloudflare’s network is that we have a lot of flexibility on where to run things, and can confidently surmount challenges like those. If we’re going to be playing three-card monte with where things are happening anyway, it makes the most sense to favor that route for solving the other problems we’re trying to tackle too.</p>
    <div>
      <h2>Pick your favorite cache pun</h2>
      <a href="#pick-your-favorite-cache-pun">
        
      </a>
    </div>
    <p>As we’ve <a href="https://blog.cloudflare.com/postgres-named-prepared-statements-supported-hyperdrive/"><u>talked about</u></a> in the past, Hyperdrive buffers protocol messages until it has enough information to know whether a query can be served from cache. In a post about how Hyperdrive works it would be a shame to skip talking about how exactly we cache query results, so let’s close by diving into that.</p><p>First and foremost, Hyperdrive uses <a href="https://developers.cloudflare.com/cache/"><u>Cloudflare's cache</u></a>, because when you have technology like that already available to you, it’d be silly not to use it. This has some implications for our architecture that are worth exploring.</p><p>The cache exists in each of Cloudflare’s data centers, and by default these are separate instances. That means that a <code>Client</code> operating close to the user has one, and an <code>Endpoint</code> operating close to the origin database has one. However, historically we weren’t able to take full advantage of that, because the logic for interacting with cache was tightly bound to the logic for managing the pool of connections.</p><p>Part of our recent architecture refactoring effort, where we switched to global <code>Endpoint</code>s, was to split up this logic such that we can take advantage of <code>Client</code>’s cache too. This was necessary because, with <code>Endpoint</code> moving to a single location for each Hyperdrive, users from other regions would otherwise have gotten cache hits served from almost as far away as the origin.</p><p>With the new architecture, the role of <code>Client</code> during active query handling transitioned from that of a “dumb pipe” to more like what <code>Endpoint</code> had always been doing. It now buffers protocol messages, and serves results from cache if possible. 
In those scenarios, Hyperdrive’s traffic never leaves the data center that the Worker is running in, reducing query latencies from 20-70 ms to an average of around 4 ms. As a side benefit, it also substantially reduces the network bandwidth Hyperdrive uses to serve these queries. A win-win!</p><p>In the scenarios where query results can’t be served from the cache in <code>Client</code>’s data center, all is still not lost. <code>Endpoint</code> may also have cached results for this query, because it can field traffic from many different <code>Client</code>s around the world. If so, it will provide these results back to <code>Client</code>, along with how much time remains before they expire, such that <code>Client</code> can both return them and store them correctly into its own cache. Likewise, if <code>Endpoint</code> does need to go to the origin database for results, they will be stored into both the <code>Client</code> and <code>Endpoint</code> caches. This ensures that follow-up queries from that same <code>Client</code> data center get the happy path with single-digit ms response times, and also reduces load on the origin database from any other <code>Client</code>’s queries. This functions similarly to how <a href="https://developers.cloudflare.com/cache/how-to/tiered-cache/"><u>Cloudflare's Tiered Cache</u></a> works, with <code>Endpoint</code>’s cache acting as a final layer of shielding for the origin database.</p>
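The two-tier lookup described above can be modeled in miniature. This toy sketch is not Hyperdrive's code, but it captures the one subtle detail called out: when <code>Client</code> populates its cache from <code>Endpoint</code>, it must store the results with the *remaining* TTL rather than a fresh one, so both tiers expire together.

```typescript
// A toy two-tier query cache: Client's local cache, then Endpoint's cache,
// then the origin database. All names are illustrative.
type Entry = { value: string; expiresAt: number }; // ms since epoch

class QueryCache {
  private store = new Map<string, Entry>();
  get(key: string, now: number): Entry | undefined {
    const e = this.store.get(key);
    return e && e.expiresAt > now ? e : undefined;
  }
  put(key: string, value: string, ttlMs: number, now: number) {
    this.store.set(key, { value, expiresAt: now + ttlMs });
  }
}

function lookup(
  key: string,
  clientCache: QueryCache,
  endpointCache: QueryCache,
  origin: (key: string) => string,
  originTtlMs: number,
  now: number
): { value: string; servedBy: "client" | "endpoint" | "origin" } {
  const local = clientCache.get(key, now);
  if (local) return { value: local.value, servedBy: "client" };

  const remote = endpointCache.get(key, now);
  if (remote) {
    // store with the REMAINING ttl, not a fresh one
    clientCache.put(key, remote.value, remote.expiresAt - now, now);
    return { value: remote.value, servedBy: "endpoint" };
  }

  const value = origin(key);
  endpointCache.put(key, value, originTtlMs, now);
  clientCache.put(key, value, originTtlMs, now);
  return { value, servedBy: "origin" };
}
```

In this model, a second <code>Client</code> in a far-away data center still never touches the origin as long as <code>Endpoint</code>'s copy is fresh, which is exactly the shielding effect described above.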
    <div>
      <h2>Come on in, the water’s fine!</h2>
      <a href="#come-on-in-the-waters-fine">
        
      </a>
    </div>
    <p>With this announcement of a Free Plan for Hyperdrive, and newly armed with the knowledge of how it works under the hood, we hope you’ll enjoy building your next project with it! You can get started with a single Wrangler command (or using the dashboard):</p>
            <pre><code>wrangler hyperdrive create postgres-hyperdrive \
  --connection-string="postgres://user:password@db-host.example.com:5432/defaultdb"</code></pre>
            <p>We’ve also included a Deploy to Cloudflare button below to let you get started with a sample Worker app using Hyperdrive, just bring your existing Postgres database! If you have any questions or ideas for future improvements, please feel free to visit our <a href="https://discord.com/channels/595317990191398933/1150557986239021106"><u>Discord channel!</u></a></p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/postgres-hyperdrive-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
 ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Hyperdrive]]></category>
            <category><![CDATA[Smart Placement]]></category>
            <category><![CDATA[SQL]]></category>
            <guid isPermaLink="false">3YedZXQKWaCm2jUQPvAeQv</guid>
            <dc:creator>Andrew Repp</dc:creator>
            <dc:creator>Matt Alonso</dc:creator>
        </item>
        <item>
            <title><![CDATA[Build global MySQL apps using Cloudflare Workers and Hyperdrive]]></title>
            <link>https://blog.cloudflare.com/building-global-mysql-apps-with-cloudflare-workers-and-hyperdrive/</link>
            <pubDate>Tue, 08 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ MySQL comes to Cloudflare Workers and Hyperdrive: MySQL drivers and ORMs are now compatible with Workers runtime, and Hyperdrive allow you to connect to your regional database from Workers. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, we’re announcing support for MySQL in Cloudflare Workers and Hyperdrive. You can now build applications on Workers that connect to your MySQL databases directly, no matter where they’re hosted, with native MySQL drivers, and with optimal performance. </p><p>Connecting to MySQL databases from Workers has been an area we’ve been focusing on <a href="https://blog.cloudflare.com/relational-database-connectors/"><u>for quite some time</u></a>. We want you to build your apps on Workers with your existing data, even if that data exists in a SQL database in us-east-1. But connecting to traditional SQL databases from Workers has been challenging: it requires making stateful connections to regional databases with drivers that haven’t been designed for <a href="https://blog.cloudflare.com/workerd-open-source-workers-runtime/"><u>the Workers runtime</u></a>. </p><p>After multiple attempts at solving this problem for Postgres, <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a> emerged as our solution that provides the best of both worlds: it supports existing database drivers and libraries while also providing best-in-class performance. And it’s such a critical part of connecting to databases from Workers that we’re making it free (check out the <a href="https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access"><u>Hyperdrive free tier announcement</u></a>).</p><p>With <a href="http://blog.cloudflare.com/full-stack-development-on-cloudflare-workers"><u>new Node.js compatibility improvements</u></a> and <a href="http://developers.cloudflare.com/changelog/2025-04-08-hyperdrive-mysql-support/"><u>Hyperdrive support for the MySQL wire protocol</u></a>, we’re happy to say MySQL support for Cloudflare Workers has been achieved. 
If you want to jump into the code and have a MySQL database on hand, this “Deploy to Cloudflare” button will get you set up with a deployed project and will create a repository so you can dig into the code. </p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/mysql-hyperdrive-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p>Read on to learn more about how we got MySQL to work on Workers, and why Hyperdrive is critical to making connectivity to MySQL databases fast.</p>
    <div>
      <h2>Getting MySQL to work on Workers</h2>
      <a href="#getting-mysql-to-work-on-workers">
        
      </a>
    </div>
    <p>Until recently, connecting to MySQL databases from Workers was not straightforward. While it’s been possible to make TCP connections from Workers <a href="https://blog.cloudflare.com/workers-tcp-socket-api-connect-databases/"><u>for some time</u></a>, MySQL drivers had many dependencies on Node.js that weren’t available on the Workers runtime, and that prevented their use.</p><p>This led to workarounds being developed. PlanetScale provided a <a href="https://planetscale.com/blog/introducing-the-planetscale-serverless-driver-for-javascript"><u>serverless driver for JavaScript</u></a>, which communicates with PlanetScale servers using HTTP instead of TCP to relay database messages. In a separate effort, a <a href="https://github.com/nora-soderlund/cloudflare-mysql"><u>fork</u></a> of the <a href="https://www.npmjs.com/package/mysql"><u>mysql</u></a> package was created to polyfill the missing Node.js dependencies and modify the <code>mysql</code> package to work on Workers. </p><p>These solutions weren’t perfect. They required using new libraries that either did not provide the level of support expected for production applications, or provided solutions that were limited to certain MySQL hosting providers. They also did not integrate with existing codebases and tooling that depended on the popular MySQL drivers (<a href="https://www.npmjs.com/package/mysql"><u>mysql</u></a> and <a href="https://www.npmjs.com/package/mysql2"><u>mysql2</u></a>). In our effort to enable all JavaScript developers to build on Workers, we knew that we had to support these drivers.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3s4E4XVAbvqRwyk6aITm1h/cb9700eae49d2593c9a834fc7a09018e/1.png" />
          </figure><p><sup><i>Package downloads from </i></sup><a href="https://www.npmjs.com/"><sup><i><u>npm</u></i></sup></a><sup><i> for </i></sup><a href="https://www.npmjs.com/package/mysql"><sup><i><u>mysql</u></i></sup></a><sup><i> and </i></sup><a href="https://www.npmjs.com/package/mysql2"><sup><i><u>mysql2</u></i></sup></a></p><p>Improving our Node.js compatibility story was critical to get these MySQL drivers working on our platform. We first identified <a href="https://nodejs.org/api/net.html"><u>net</u></a> and <a href="https://nodejs.org/api/stream.html"><u>stream</u></a> as APIs that were needed by both drivers. This, complemented by Workers’ <a href="https://blog.cloudflare.com/more-npm-packages-on-cloudflare-workers-combining-polyfills-and-native-code/"><u>nodejs_compat</u></a> to resolve unused Node.js dependencies with <a href="https://github.com/unjs/unenv"><code><u>unenv</u></code></a>, enabled the <a href="https://www.npmjs.com/package/mysql"><u>mysql</u></a> package to work on Workers:</p>
            <pre><code>import { createConnection } from 'mysql';

export default {
  async fetch(request, env, ctx): Promise&lt;Response&gt; {
    const result = await new Promise&lt;any&gt;((resolve, reject) =&gt; {
      const connection = createConnection({
        host: env.DB_HOST,
        user: env.DB_USER,
        password: env.DB_PASSWORD,
        database: env.DB_NAME,
        port: env.DB_PORT
      });

      connection.connect((error: { message: string }) =&gt; {
        if (error) {
          reject(new Error(error.message));
          return;
        }

        connection.query("SHOW tables;", [], (error, rows, fields) =&gt; {
          connection.end();

          if (error) {
            reject(new Error(error.message));
            return;
          }

          resolve({ fields, rows });
        });
      });
    });

    return new Response(JSON.stringify(result), {
      headers: {
        'Content-Type': 'application/json',
      },
    });
  },
} satisfies ExportedHandler&lt;Env&gt;;</code></pre>
            <p>Further work was required to get <a href="https://www.npmjs.com/package/mysql2"><u>mysql2</u></a> working: dependencies on Node.js <a href="https://nodejs.org/api/timers.html"><u>timers</u></a> and the JavaScript <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval"><u>eval</u></a> API remained. While we were able to land support for <a href="https://github.com/cloudflare/workerd/pull/3346"><u>timers</u></a> in the Workers runtime, <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval"><u>eval</u></a> was not an API that we could securely enable in the Workers runtime at this time. </p><p><a href="https://www.npmjs.com/package/mysql2"><u>mysql2</u></a> uses <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval"><u>eval</u></a> to optimize the parsing of MySQL results containing large rows with more than 100 columns (see <a href="https://github.com/sidorares/node-mysql2/issues/2055#issuecomment-1614222188"><u>benchmarks</u></a>). This blocked the driver from working on Workers, since the Workers runtime does not support this module. Luckily, <a href="https://github.com/sidorares/node-mysql2/pull/2289"><u>prior effort existed</u></a> to get mysql2 working on Workers using static parsers for handling text and binary MySQL data types without using <code>eval()</code>, which provides similar performance for a majority of scenarios. </p><p>In <a href="https://www.npmjs.com/package/mysql2"><u>mysql2</u></a> version <code>3.13.0</code>, a new option to disable the use of <code>eval()</code> was released to make it possible to use the driver in Cloudflare Workers:</p>
            <pre><code>import { createConnection  } from 'mysql2/promise';

export default {
 async fetch(request, env, ctx): Promise&lt;Response&gt; {
    const connection = await createConnection({
     host: env.DB_HOST,
     user: env.DB_USER,
     password: env.DB_PASSWORD,
     database: env.DB_NAME,
     port: env.DB_PORT

     // The following line is needed for mysql2 to work on Workers (as explained above)
     // mysql2 uses eval() to optimize result parsing for rows with &gt; 100 columns
     // eval() is not available in Workers due to runtime limitations
     // Configure mysql2 to use static parsing with disableEval
     disableEval: true
   });

   const [results, fields] = await connection.query(
     'SHOW tables;'
   );

   return new Response(JSON.stringify({ results, fields }), {
     headers: {
       'Content-Type': 'application/json',
       'Access-Control-Allow-Origin': '*',
     },
   });
 },
} satisfies ExportedHandler&lt;Env&gt;;</code></pre>
            <p>So, with these efforts, it is now possible to connect to MySQL from Workers. But, getting the MySQL drivers working on Workers was only half of the battle. To make MySQL on Workers performant for production uses, we needed to make it possible to connect to MySQL databases with Hyperdrive.</p>
    <div>
      <h2>Supporting MySQL in Hyperdrive</h2>
      <a href="#supporting-mysql-in-hyperdrive">
        
      </a>
    </div>
    <p>If you’re a MySQL developer, <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a> may be new to you. Hyperdrive solves a core problem: connecting from Workers to regional SQL databases is slow. Database drivers <a href="https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access/"><u>require many roundtrips</u></a> to establish a connection to a database. Without the ability to reuse these connections between Worker invocations, a lot of unnecessary latency is added to your application. </p><p>Hyperdrive solves this problem by pooling connections to your database globally and eliminating unnecessary roundtrips for connection setup. As a plus, Hyperdrive also provides integrated caching to offload popular queries from your database. We wrote an entire deep dive on how Hyperdrive does this, which you should definitely <a href="https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access/"><u>check out</u></a>. </p>
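Stripped of everything else, the core idea Hyperdrive generalizes is a pool that hands out already-established connections instead of paying the multi-roundtrip setup cost on every request. A bare-bones, illustrative sketch:

```typescript
// A minimal connection pool: reuse a warm connection when one exists,
// and only dial a new one when the pool is empty.
class Pool<T> {
  private idle: T[] = [];
  constructor(private connect: () => T) {}

  checkout(): T {
    return this.idle.pop() ?? this.connect();
  }
  checkin(conn: T) {
    this.idle.push(conn);
  }
}
```

The second checkout below reuses the warm connection from the first, so only one (expensive) connection setup ever happens.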
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/R8bfw57o8KEXHD7EstTmP/eee5182beb931373c25a1c42f5dd0ce3/2.png" />
          </figure><p>Getting Hyperdrive to support MySQL was critical for us to be able to say “Connect from Workers to MySQL databases”. That’s easier said than done. To support a new database type, Hyperdrive needs to be able to parse the wire protocol of the database in question, in this case, the <a href="https://dev.mysql.com/doc/dev/mysql-server/8.4.3/PAGE_PROTOCOL.html"><u>MySQL protocol</u></a>. Once this is accomplished, Hyperdrive can extract queries from protocol messages, cache results across Cloudflare locations, relay messages to a data center close to your database, and pool connections reliably close to your origin database. </p><p>Adapting Hyperdrive to parse a new wire protocol is a challenge in its own right. But MySQL also presented some notable differences from Postgres. While the intricacies are beyond the scope of this post, the differences in MySQL’s <a href="https://dev.mysql.com/doc/refman/8.4/en/authentication-plugins.html"><u>authentication plugins</u></a> across providers, and the way MySQL’s connection handshake uses <a href="https://dev.mysql.com/doc/dev/mysql-server/latest/group__group__cs__capabilities__flags.html"><u>capability flags</u></a>, required some adaptation of Hyperdrive. In the end, we leveraged the experience we gained in building Hyperdrive for Postgres to iterate on our support for MySQL. And we’re happy to announce MySQL support is available for Hyperdrive, with all of the <a href="https://developers.cloudflare.com/changelog/2025-03-04-hyperdrive-pooling-near-database-and-ip-range-egress/"><u>performance</u></a> <a href="https://developers.cloudflare.com/changelog/2024-12-11-hyperdrive-caching-at-edge/"><u>improvements</u></a> we’ve made to Hyperdrive available from the get-go!</p><p>Now, you can create new Hyperdrive configurations for MySQL databases hosted anywhere (we’ve tested MySQL and MariaDB databases from AWS (including AWS Aurora), GCP, Azure, PlanetScale, and self-hosted databases). 
You can create Hyperdrive configurations for your MySQL databases from the dashboard or the <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler CLI</u></a>:</p>
            <pre><code>wrangler hyperdrive create mysql-hyperdrive \
  --connection-string="mysql://user:password@db-host.example.com:3306/defaultdb"</code></pre>
            <p>In your Wrangler configuration file, you’ll need to set your Hyperdrive binding to the ID of the newly created Hyperdrive configuration as well as set Node.js compatibility flags:</p>
            <pre><code>{
 "$schema": "node_modules/wrangler/config-schema.json",
 "name": "workers-mysql-template",
 "main": "src/index.ts",
 "compatibility_date": "2025-03-10",
 "observability": {
   "enabled": true
 },
 "compatibility_flags": [
   "nodejs_compat"
 ],
 "hyperdrive": [
   {
     "binding": "HYPERDRIVE",
     "id": "&lt;HYPERDRIVE_CONFIG_ID&gt;"
   }
 ]
}</code></pre>
            <p>From your Cloudflare Worker, the Hyperdrive binding provides you with custom connection credentials that connect to your Hyperdrive configuration. From there onward, all of your queries and database messages will be routed to your origin database by Hyperdrive, leveraging Cloudflare’s network to speed up routing.</p>
            <pre><code>import { createConnection  } from 'mysql2/promise';

export interface Env {
 HYPERDRIVE: Hyperdrive;
}

export default {
 async fetch(request, env, ctx): Promise&lt;Response&gt; {
  
   // Hyperdrive provides new connection credentials to use with your existing drivers
   const connection = await createConnection({
     host: env.HYPERDRIVE.host,
     user: env.HYPERDRIVE.user,
     password: env.HYPERDRIVE.password,
     database: env.HYPERDRIVE.database,
     port: env.HYPERDRIVE.port,

     // Configure mysql2 to use static parsing (as explained above in Part 1)
     disableEval: true 
   });

   const [results, fields] = await connection.query(
     'SHOW tables;'
   );

   return new Response(JSON.stringify({ results, fields }), {
     headers: {
       'Content-Type': 'application/json',
       'Access-Control-Allow-Origin': '*',
     },
   });
 },
} satisfies ExportedHandler&lt;Env&gt;;</code></pre>
            <p>As you can see from this code snippet, you only need to swap the credentials in your JavaScript code for those provided by Hyperdrive to migrate your existing code to Workers. No need to change the ORMs or drivers you’re using! </p>
    <div>
      <h2>Get started building with MySQL and Hyperdrive</h2>
      <a href="#get-started-building-with-mysql-and-hyperdrive">
        
      </a>
    </div>
    <p>MySQL support for Workers and Hyperdrive has been long overdue and we’re excited to see what you build. We published a template for you to get started building your MySQL applications on Workers with Hyperdrive:</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/mysql-hyperdrive-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>As for what’s next, we’re going to continue iterating on our support for MySQL during the beta to support more of the MySQL protocol and MySQL-compatible databases. We’re also going to continue to expand the feature set of Hyperdrive to make it more flexible for your full-stack workloads and more performant for building full-stack global apps on Workers.</p><p>Finally, whether you’re using MySQL, PostgreSQL, or any of the other compatible databases, we think you should be using Hyperdrive to get the best performance. And because we want to enable you to build on Workers regardless of your preferred database, <a href="https://blog.cloudflare.com/how-hyperdrive-speeds-up-database-access"><u>we’re making Hyperdrive available to the Workers free plan. </u></a></p><p>We want to hear your feedback on MySQL, Hyperdrive, and building global applications with Workers. Join the #hyperdrive channel in our <a href="http://discord.cloudflare.com/"><u>Developer Discord</u></a> to ask questions, share what you’re building, and talk to our Product &amp; Engineering teams directly.</p><p>Thank you to <a href="https://github.com/wellwelwel"><u>Weslley Araújo</u></a>, <a href="https://github.com/sidorares"><u>Andrey Sidorov</u></a>, <a href="https://github.com/shiyuhang0"><u>Shi Yuhang</u></a>, <a href="https://github.com/Mini256"><u>Zhiyuan Liang</u></a>, <a href="https://github.com/nora-soderlund"><u>Nora Söderlund</u></a> and other open-source contributors who helped push this initiative forward.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[MySQL]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Hyperdrive]]></category>
            <guid isPermaLink="false">77Nbj8Tnnrr6vMWsDSFekZ</guid>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Andrew Repp</dc:creator>
            <dc:creator>Kirk Nickish</dc:creator>
            <dc:creator>Yagiz Nizipli</dc:creator>
        </item>
        <item>
            <title><![CDATA[Elephants in tunnels: how Hyperdrive connects to databases inside your VPC networks]]></title>
            <link>https://blog.cloudflare.com/elephants-in-tunnels-how-hyperdrive-connects-to-databases-inside-your-vpc-networks/</link>
            <pubDate>Fri, 25 Oct 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Hyperdrive (Cloudflare’s globally distributed SQL connection pooler and cache) recently added support for directing database traffic from Workers across Cloudflare Tunnels. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>With September’s <a href="https://blog.cloudflare.com/builder-day-2024-announcements/#connect-to-private-databases-from-workers"><u>announcement</u></a> of Hyperdrive’s ability to send database traffic from <a href="https://workers.cloudflare.com/"><u>Workers</u></a> over <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/"><u>Cloudflare Tunnels</u></a>, we wanted to dive into the details of what it took to make this happen.</p>
    <div>
      <h2>Hyper-who?</h2>
      <a href="#hyper-who">
        
      </a>
    </div>
    <p>Accessing your data from anywhere in Region Earth can be hard. Traditional databases are powerful, familiar, and feature-rich, but your users can be thousands of miles away from your database. This can cause slower connection startup times, slower queries, and connection exhaustion as everything takes longer to accomplish.</p><p><a href="https://developers.cloudflare.com/workers/"><u>Cloudflare Workers</u></a> is an incredibly lightweight runtime, which enables our customers to deploy their applications globally by default and renders the <a href="https://en.wikipedia.org/wiki/Cold_start_(computing)"><u>cold start</u></a> problem almost irrelevant. The trade-off for these light, ephemeral execution contexts is the lack of persistence for things like database connections. Database connections are also notoriously expensive to spin up, with many round trips required between client and server before any query or result bytes can be exchanged.</p><p><a href="https://blog.cloudflare.com/hyperdrive-making-regional-databases-feel-distributed"><u>Hyperdrive</u></a> is designed to make the centralized databases you already have feel like they’re global while keeping connections to those databases hot. We use our <a href="https://www.cloudflare.com/network/"><u>global network</u></a> to get faster routes to your database, keep connection pools primed, and cache your most frequently run queries as close to users as possible.</p>
    <div>
      <h2>Why a Tunnel?</h2>
      <a href="#why-a-tunnel">
        
      </a>
    </div>
    <p>For something as sensitive as your database, exposing access to the public Internet can be uncomfortable. It is common to instead host your database on a private network, and allowlist known-safe IP addresses or configure <a href="https://www.cloudflare.com/learning/network-layer/what-is-gre-tunneling/"><u>GRE tunnels</u></a> to permit traffic to it. This is complex, toilsome, and error-prone. </p><p>On Cloudflare’s <a href="https://www.cloudflare.com/en-gb/developer-platform/"><u>Developer Platform</u></a>, we strive for simplicity and ease-of-use. We cannot expect all of our customers to be experts in configuring networking solutions, and so we went in search of a simpler solution. <a href="https://www.cloudflare.com/the-net/top-of-mind-security/customer-zero/"><u>Being your own customer</u></a> is rarely a bad choice, and it so happens that Cloudflare offers an excellent option for this scenario: Tunnels.</p><p><a href="https://www.cloudflare.com/products/tunnel/"><u>Cloudflare Tunnel</u></a> is a Zero Trust product that creates a secure connection between your private network and Cloudflare. Exposing services within your private network can be as simple as <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/downloads/"><u>running a </u><code><u>cloudflared</u></code><u> binary</u></a>, or deploying a Docker container running the <a href="https://hub.docker.com/r/cloudflare/cloudflared"><code><u>cloudflared</u></code><u> image we distribute</u></a>. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3182f43rbwdH9krF1xhdlC/d22430cdb1efa134031f94fea691c36e/image1.png" />
          </figure>
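For a concrete picture of that setup (all values here are placeholders; the `cloudflared` documentation linked above is the authoritative reference), a minimal `config.yml` that exposes a private Postgres instance through a Tunnel might look like:

```yaml
# Illustrative only — consult the cloudflared docs linked above for the
# authoritative configuration reference. Values are placeholders.
tunnel: <your-tunnel-UUID>
credentials-file: /etc/cloudflared/<your-tunnel-UUID>.json
ingress:
  # Route the Tunnel's hostname to the database listening inside the VPC
  - hostname: db.example.com
    service: tcp://localhost:5432
  # cloudflared requires a catch-all rule as the final ingress entry
  - service: http_status:404
```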
    <div>
      <h2>A custom handler and generic streams</h2>
      <a href="#a-custom-handler-and-generic-streams">
        
      </a>
    </div>
    <p>Integrating with Tunnels to support sending Postgres directly through them was a bit of a new challenge for us. Most of the time, when we use Tunnels internally (more on that later!), we rely on the excellent job <code>cloudflared</code> does of handling all of the mechanics, and we just treat them as pipes. That wouldn’t work for Hyperdrive, though, so we had to dig into how Tunnels actually ingress traffic to build a solution.</p><p>Hyperdrive handles Postgres traffic using an entirely custom implementation of the <a href="https://www.postgresql.org/docs/current/protocol.html"><u>Postgres message protocol</u></a>. This is necessary, because we sometimes have to <a href="https://blog.cloudflare.com/postgres-named-prepared-statements-supported-hyperdrive"><u>alter the specific type or content</u></a> of messages sent from client to server, or vice versa. Handling individual bytes gives us the flexibility to implement whatever logic any new feature might need.</p><p>An additional, perhaps less obvious, benefit of handling Postgres message traffic as just bytes is that we are not bound to the transport layer choices of some <a href="https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapping"><u>ORM</u></a> or library. One of the nuances of running services in Cloudflare is that we may want to egress traffic over different services or protocols, for a variety of different reasons. In this case, being able to egress traffic via a Tunnel would be pretty challenging if we were stuck with whatever raw TCP socket a library had established for us.</p><p>The way we accomplish this relies on a mainstay of Rust: <a href="https://doc.rust-lang.org/book/ch10-02-traits.html"><u>traits</u></a> (which are how Rust lets developers apply logic across generic functions and types). 
In the Rust ecosystem, there are two traits that define the behavior Hyperdrive wants out of its transport layers: <a href="https://docs.rs/tokio/latest/tokio/io/trait.AsyncRead.html"><code><u>AsyncRead</u></code></a> and <a href="https://docs.rs/tokio/latest/tokio/io/trait.AsyncWrite.html"><code><u>AsyncWrite</u></code></a>. There are a couple of others we also need, but we’re going to focus on just these two. These traits enable us to code our entire custom handler against a generic stream of data, without the handler needing to know anything about the underlying protocol used to implement the stream. So, we can pass around a WebSocket connection as a generic I/O stream, wherever it might be needed.</p><p>As an example, the code to create a generic TCP stream and send a Postgres startup message across it might look like this:</p>
            <pre><code>/// Send a startup message to a Postgres server, in the role of a PG client.
/// https://www.postgresql.org/docs/current/protocol-message-formats.html#PROTOCOL-MESSAGE-FORMATS-STARTUPMESSAGE
pub async fn send_startup&lt;S&gt;(stream: &amp;mut S, user_name: &amp;str, db_name: &amp;str, app_name: &amp;str) -&gt; Result&lt;(), ConnectionError&gt;
where
    S: AsyncWrite + Unpin,
{
    let protocol_number: i32 = 196608; // protocol version 3.0, i.e. 3 &lt;&lt; 16
    let user_str = &amp;b"user\0"[..];
    let user_bytes = user_name.as_bytes();
    let db_str = &amp;b"database\0"[..];
    let db_bytes = db_name.as_bytes();
    let app_str = &amp;b"application_name\0"[..];
    let app_bytes = app_name.as_bytes();
    let len = 4 + 4
        + user_str.len() + user_bytes.len() + 1
        + db_str.len() + db_bytes.len() + 1
        + app_str.len() + app_bytes.len() + 1 + 1;

    // Construct a BytesMut of our startup message, then send it
    let mut startup_message = BytesMut::with_capacity(len);
    startup_message.put_i32(len as i32);
    startup_message.put_i32(protocol_number);
    startup_message.put(user_str);
    startup_message.put_slice(user_bytes);
    startup_message.put_u8(0);
    startup_message.put(db_str);
    startup_message.put_slice(db_bytes);
    startup_message.put_u8(0);
    startup_message.put(app_str);
    startup_message.put_slice(app_bytes);
    startup_message.put_u8(0);
    startup_message.put_u8(0);

    match stream.write_all(&amp;startup_message).await {
        Ok(_) =&gt; Ok(()),
        Err(err) =&gt; {
            error!("Error writing startup to server: {}", err.to_string());
            Err(ConnectionError::InternalError)
        }
    }
}

// Connect to a TCP socket
let mut stream = match TcpStream::connect(("localhost", 5432)).await {
    Ok(s) =&gt; s,
    Err(err) =&gt; {
        error!("Error connecting to address: {}", err.to_string());
        return ConnectionError::InternalError;
    }
};
let _ = send_startup(&amp;mut stream, "db_user", "my_db", "my_app").await;</code></pre>
            <p>With this approach, if we wanted to encrypt the stream using <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/#:~:text=Transport%20Layer%20Security%2C%20or%20TLS,web%20browsers%20loading%20a%20website."><u>TLS</u></a> before we write to it (upgrading our existing <code>TcpStream</code> connection in-place, to an <code>SslStream</code>), we would only have to change the code we use to create the stream, while generating and sending the traffic would remain unchanged. This is because <code>SslStream</code> also implements <code>AsyncWrite</code>!</p>
            <pre><code>// We're handwaving the SSL setup here. You're welcome.
let conn_config = new_tls_client_config()?;

// Encrypt the TcpStream, returning an SslStream
let mut ssl_stream = match tokio_boring::connect(conn_config, domain, stream).await {
    Ok(s) =&gt; s,
    Err(err) =&gt; {
        error!("Error during TLS handshake: {}", err.to_string());
        return ConnectionError::InternalError;
    }
};
let _ = send_startup(&amp;mut ssl_stream, "db_user", "my_db", "my_app").await;</code></pre>
            
    <div>
      <h2>Whence WebSocket</h2>
      <a href="#whence-websocket">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/html/rfc6455"><u>WebSocket</u></a> is an application layer protocol that enables bidirectional communication between a client and server. Typically, to establish a WebSocket connection, a client initiates an HTTP request and indicates they wish to upgrade the connection to WebSocket via the “Upgrade” header. Then, once the client and server complete the handshake, both parties can send messages over the connection until one of them terminates it.</p><p>Now, it turns out that the way Cloudflare Tunnels work under the hood is that both ends of the tunnel want to speak WebSocket, and rely on a translation layer to convert all traffic to or from WebSocket. The <code>cloudflared</code> daemon you spin up within your private network handles this for us! For Hyperdrive, however, we did not have a suitable translation layer to send Postgres messages across WebSocket, and had to write one.</p><p>One of the (many) fantastic things about Rust traits is that the contract they present is very clear. To be <code>AsyncRead</code>, you just need to implement <code>poll_read</code>. To be <code>AsyncWrite</code>, you need to implement only three functions (<code>poll_write</code>, <code>poll_flush</code>, and <code>poll_shutdown</code>). Further, there is excellent support for WebSocket in Rust built on top of the <a href="https://github.com/snapview/tungstenite-rs"><u>tungstenite-rs library</u></a>.</p><p>Thus, building our custom WebSocket stream such that it can share the same machinery as all our other generic streams just means translating the existing WebSocket support into these poll functions. There are some existing OSS projects that do this, but for multiple reasons we could not use the existing options. 
The primary reason is that Hyperdrive operates across multiple threads (thanks to the <a href="https://docs.rs/tokio/latest/tokio/runtime/index.html"><u>tokio runtime</u></a>), and so we rely on our connections to also handle <a href="https://doc.rust-lang.org/std/marker/trait.Send.html"><code><u>Send</u></code></a>, <a href="https://doc.rust-lang.org/std/marker/trait.Sync.html"><code><u>Sync</u></code></a>, and <a href="https://doc.rust-lang.org/std/marker/trait.Unpin.html"><code><u>Unpin</u></code></a>. None of the available solutions had all five traits handled. It turns out that most of them went with the paradigm of <a href="https://docs.rs/futures/latest/futures/sink/trait.Sink.html"><code><u>Sink</u></code></a> and <a href="https://docs.rs/futures/latest/futures/stream/trait.Stream.html"><code><u>Stream</u></code></a>, which provide a solid base from which to translate to <code>AsyncRead</code> and <code>AsyncWrite</code>. In fact some of the functions overlap, and can be passed through almost unchanged. For example, <code>poll_flush</code> and <code>poll_shutdown</code> have 1-to-1 analogs, and require almost no engineering effort to convert from <code>Sink</code> to <code>AsyncWrite</code>.</p>
            <pre><code>/// We use this struct to implement the traits we need on top of a WebSocketStream
pub struct HyperSocket&lt;S&gt;
where
    S: AsyncRead + AsyncWrite + Send + Sync + Unpin,
{
    inner: WebSocketStream&lt;S&gt;,
    read_state: Option&lt;ReadState&gt;,
    write_err: Option&lt;Error&gt;,
}

impl&lt;S&gt; AsyncWrite for HyperSocket&lt;S&gt;
where
    S: AsyncRead + AsyncWrite + Send + Sync + Unpin,
{
    /// Completing AsyncWrite for this sketch: the third required method
    /// forwards writes into the Sink as a single binary WebSocket message.
    fn poll_write(mut self: Pin&lt;&amp;mut Self&gt;, cx: &amp;mut Context&lt;'_&gt;, buf: &amp;[u8]) -&gt; Poll&lt;io::Result&lt;usize&gt;&gt; {
        if let Err(err) = ready!(Pin::new(&amp;mut self.inner).poll_ready(cx)) {
            return Poll::Ready(Err(Error::new(ErrorKind::Other, err)));
        }
        match Pin::new(&amp;mut self.inner).start_send(Message::binary(buf.to_vec())) {
            Ok(_) =&gt; Poll::Ready(Ok(buf.len())),
            Err(err) =&gt; Poll::Ready(Err(Error::new(ErrorKind::Other, err))),
        }
    }

    fn poll_flush(mut self: Pin&lt;&amp;mut Self&gt;, cx: &amp;mut Context&lt;'_&gt;) -&gt; Poll&lt;io::Result&lt;()&gt;&gt; {
        match ready!(Pin::new(&amp;mut self.inner).poll_flush(cx)) {
            Ok(_) =&gt; Poll::Ready(Ok(())),
            Err(err) =&gt; Poll::Ready(Err(Error::new(ErrorKind::Other, err))),
        }
    }

    fn poll_shutdown(mut self: Pin&lt;&amp;mut Self&gt;, cx: &amp;mut Context&lt;'_&gt;) -&gt; Poll&lt;io::Result&lt;()&gt;&gt; {
        match ready!(Pin::new(&amp;mut self.inner).poll_close(cx)) {
            Ok(_) =&gt; Poll::Ready(Ok(())),
            Err(err) =&gt; Poll::Ready(Err(Error::new(ErrorKind::Other, err))),
        }
    }
}
</code></pre>
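The read side needs more bookkeeping than the write side: the WebSocket yields whole messages, while `poll_read` must hand out caller-sized chunks, which is what the `read_state` field on the struct above is for. A synchronous, std-only analog of that carry-over logic (a toy illustration of the idea, not Hyperdrive's code) looks like:

```rust
use std::io::Read;

// A synchronous analog of the bookkeeping HyperSocket's `read_state` performs:
// the underlying transport yields whole messages, but `read` must hand out
// caller-sized chunks, so any unread tail of a message is carried across calls.
struct MessageReader<I: Iterator<Item = Vec<u8>>> {
    messages: I,
    pending: Vec<u8>, // unread remainder of the current message
    offset: usize,
}

impl<I: Iterator<Item = Vec<u8>>> MessageReader<I> {
    fn new(messages: I) -> Self {
        Self { messages, pending: Vec::new(), offset: 0 }
    }
}

impl<I: Iterator<Item = Vec<u8>>> Read for MessageReader<I> {
    fn read(&mut self, out: &mut [u8]) -> std::io::Result<usize> {
        // Refill from the next message whenever the carried-over bytes run out
        // (skipping any empty messages rather than signalling EOF).
        while self.offset == self.pending.len() {
            match self.messages.next() {
                Some(msg) => {
                    self.pending = msg;
                    self.offset = 0;
                }
                None => return Ok(0), // transport closed: report end of stream
            }
        }
        let n = out.len().min(self.pending.len() - self.offset);
        out[..n].copy_from_slice(&self.pending[self.offset..self.offset + n]);
        self.offset += n;
        Ok(n)
    }
}
```

The async version is the same idea expressed through `poll_read`: stash the unread tail of the last WebSocket message, drain it first, and only then poll the inner `Stream` for the next message.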
            <p>With that translation done, we can use an existing WebSocket library to upgrade our <code>SslStream</code> connection to a Cloudflare Tunnel, and wrap the result in our <code>AsyncRead/AsyncWrite</code> implementation. The result can then be used anywhere that our other transport streams would work, without any changes needed to the rest of our codebase! </p><p>That would look something like this:</p>
            <pre><code>let websocket = match tokio_tungstenite::client_async(request, ssl_stream).await {
    Ok((ws, _response)) =&gt; ws,
    Err(err) =&gt; {
        error!("Error during websocket conn setup: {}", err.to_string());
        return ConnectionError::InternalError;
    }
};
let mut websocket_stream = HyperSocket::new(websocket);
let _ = send_startup(&amp;mut websocket_stream, "db_user", "my_db", "my_app").await;</code></pre>
            
    <div>
      <h2>Access granted</h2>
      <a href="#access-granted">
        
      </a>
    </div>
    <p>An observant reader might have noticed that in the code example above we snuck in a variable named <code>request</code> that we passed in when upgrading from an <code>SslStream</code> to a <code>WebSocketStream</code>. This is for multiple reasons. The first reason is that Tunnels are assigned a hostname and use this hostname for routing. The second and more interesting reason is that (as mentioned above) when negotiating an upgrade from HTTP to WebSocket, a request must be sent to the server hosting the ingress side of the Tunnel to <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Protocol_upgrade_mechanism"><u>perform the upgrade</u></a>. This is pretty universal, but we also add in an extra piece here.</p><p>At Cloudflare, we believe that <a href="https://blog.cloudflare.com/secure-by-default-understanding-new-cisa-guide/"><u>secure defaults</u></a> and <a href="https://www.cloudflare.com/learning/security/glossary/what-is-defense-in-depth/"><u>defense in depth</u></a> are the correct ways to build a better Internet. This is why traffic across Tunnels is encrypted, for example. However, that does not necessarily prevent unwanted traffic from being sent into your Tunnel, and therefore egressing out to your database. While Postgres offers a robust set of <a href="https://www.postgresql.org/docs/current/user-manag.html"><u>access control</u></a> options for protecting your database, wouldn’t it be best if unwanted traffic never got into your private network in the first place? </p><p>To that end, all <a href="https://developers.cloudflare.com/hyperdrive/configuration/connect-to-private-database/"><u>Tunnels set up for use with Hyperdrive</u></a> should have a <a href="https://developers.cloudflare.com/cloudflare-one/applications/"><u>Zero Trust Access Application</u></a> configured to protect them. These applications should use a <a href="https://developers.cloudflare.com/cloudflare-one/identity/service-tokens/"><u>Service Token</u></a> to authorize connections. 
When setting up a new Hyperdrive, you have the option to provide the token’s ID and Secret, which will be encrypted and stored alongside the rest of your configuration. These will be presented as part of the WebSocket upgrade request to authorize the connection, allowing your database traffic through while preventing unwanted access.</p><p>This can be done within the request’s headers, and might look something like this:</p>
            <pre><code>let ws_url = format!("wss://{}", host);
let mut request = match ws_url.into_client_request() {
    Ok(req) =&gt; req,
    Err(err) =&gt; {
        error!(
            "Hostname {} could not be parsed into a valid request URL: {}", 
            host,
            err.to_string()
        );
        return ConnectionError::InternalError;
    }
};
request.headers_mut().insert(
    "CF-Access-Client-Id",
    http::header::HeaderValue::from_str(&amp;client_id).unwrap(),
);
request.headers_mut().insert(
    "CF-Access-Client-Secret",
    http::header::HeaderValue::from_str(&amp;client_secret).unwrap(),
);
</code></pre>
            
    <div>
      <h2>Building for customer zero</h2>
      <a href="#building-for-customer-zero">
        
      </a>
    </div>
    <p>If you’ve been reading the blog for a long time, some of this might sound a bit familiar.  This isn’t the first time that we’ve <a href="https://blog.cloudflare.com/cloudflare-tunnel-for-postgres/"><u>sent Postgres traffic across a tunnel</u></a>, it’s something most of us do from our laptops regularly.  This works very well for interactive use cases with low traffic volume and a high tolerance for latency, but historically most of our products have not been able to employ the same approach.</p><p>Cloudflare operates <a href="https://www.cloudflare.com/network/"><u>many data centers</u></a> around the world, and most services run in every one of those data centers. There are some tasks, however, that make the most sense to run in a more centralized fashion. These include tasks such as managing control plane operations, or storing configuration state.  Nearly every Cloudflare product houses its control plane information in <a href="https://blog.cloudflare.com/performance-isolation-in-a-multi-tenant-database-environment/"><u>Postgres clusters</u></a> run centrally in a handful of our data centers, and we use a variety of approaches for accessing that centralized data from elsewhere in our network. For example, many services currently use a push-based model to publish updates to <a href="https://blog.cloudflare.com/moving-quicksilver-into-production/"><u>Quicksilver</u></a>, and work through the complexities implied by such a model. This has been a recurring challenge for any team looking to build a new product.</p><p>Hyperdrive’s entire reason for being is to make it easy to access such central databases from our global network. When we began exploring Tunnel integrations as a feature, many internal teams spoke up immediately and strongly suggested they’d be interested in using it themselves. This was an excellent opportunity for Cloudflare to scratch its own itch, while also getting a lot of traffic on a new feature before releasing it directly to the public. 
As always, being “customer zero” means that we get fast feedback, more reliability over time, stronger connections between teams, and an overall better suite of products. We jumped at the chance.</p><p>As we rolled out early versions of Tunnel integration, we worked closely with internal teams to get them access to it, and fixed any rough spots they encountered. We’re pleased to share that this first batch of teams have found great success building new or <a href="https://www.cloudflare.com/learning/cloud/how-to-refactor-applications/">refactored</a> products on Hyperdrive over Tunnels. For example: if you’ve already tried out <a href="https://blog.cloudflare.com/builder-day-2024-announcements/#continuous-integration-and-delivery"><u>Workers Builds</u></a>, or recently <a href="https://www.cloudflare.com/trust-hub/reporting-abuse/"><u>submitted an abuse report</u></a>, you’re among our first users!  At the time of this writing, we have several more internal teams working to onboard, and we on the Hyperdrive team are very excited to see all the different ways in which fast and simple connections from Workers to a centralized database can help Cloudflare just as much as they’ve been helping our external customers.</p>
    <div>
      <h2>Outro</h2>
      <a href="#outro">
        
      </a>
    </div>
    <p>Cloudflare is on a mission to make the Internet faster, safer, and more reliable. Hyperdrive was built to make connecting to centralized databases from the Workers runtime as quick and consistent as possible, and this latest development is designed to help all those who want to use Hyperdrive without directly exposing resources within their virtual private clouds (VPCs) on the public web.</p><p>To this end, we chose to build a solution around our suite of industry-leading <a href="https://developers.cloudflare.com/cloudflare-one/"><u>Zero Trust</u></a> tools, and were delighted to find how simple it was to implement in our runtime given the power and extensibility of the Rust <code>trait</code> system. </p><p>Without waiting for the ink to dry, multiple teams within Cloudflare have adopted this new feature to quickly and easily solve what have historically been complex challenges, and are happily operating it in production today.</p><p>And now, if you haven't already, try <a href="https://developers.cloudflare.com/hyperdrive/configuration/connect-to-private-database/"><u>setting up Hyperdrive across a Tunnel</u></a>, and let us know what you think in the <a href="https://discord.com/channels/595317990191398933/1150557986239021106"><u>Hyperdrive Discord channel</u></a>!</p> ]]></content:encoded>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Hyperdrive]]></category>
            <category><![CDATA[Postgres]]></category>
            <category><![CDATA[SQL]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[WebSockets]]></category>
            <guid isPermaLink="false">5GK429XQHhFzVyKSXGZ2R6</guid>
            <dc:creator>Andrew Repp</dc:creator>
            <dc:creator>Emilio Assunção</dc:creator>
            <dc:creator>Abhishek Chanda</dc:creator>
        </item>
        <item>
            <title><![CDATA[Supporting Postgres Named Prepared Statements in Hyperdrive]]></title>
            <link>https://blog.cloudflare.com/postgres-named-prepared-statements-supported-hyperdrive/</link>
            <pubDate>Fri, 28 Jun 2024 13:00:09 GMT</pubDate>
            <description><![CDATA[ Hyperdrive (Cloudflare’s globally distributed SQL connection pooler and cache) recently added support for Postgres protocol-level named prepared statements across pooled connections. We dive deep on what it took to add this feature ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Hyperdrive (Cloudflare’s globally distributed SQL connection pooler and cache) recently added support for Postgres protocol-level named prepared statements across pooled connections. Named prepared statements allow Postgres to cache query execution plans, providing potentially substantial performance improvements. Further, many popular drivers in the ecosystem use these by default, meaning that not having them is a bit of a footgun for developers. We are very excited that Hyperdrive’s users will now have access to better performance and a more seamless development experience, without needing to make any significant changes to their applications!</p><p>While we're not the first connection pooler to add this support (<a href="https://www.pgbouncer.org/">PgBouncer</a> got to it in October 2023 in <a href="https://github.com/pgbouncer/pgbouncer/releases/tag/pgbouncer_1_21_0">version 1.21</a>, for example), there were some unique challenges in how we implemented it. To that end, we wanted to do a deep dive on what it took for us to deliver this.</p>
    <div>
      <h3>Hyper-what?</h3>
      <a href="#hyper-what">
        
      </a>
    </div>
    <p>One of the classic problems of building on the web is that your users are everywhere, but your database tends to be in one spot.  Combine that with pesky limitations like network routing, or the speed of light, and you can often run into situations where your users feel the pain of having your database so far away. This can look like slower queries, slower startup times, and connection exhaustion as everything takes longer to accomplish.</p><p><a href="/hyperdrive-making-regional-databases-feel-distributed">Hyperdrive</a> is designed to make the centralized databases you already have feel like they’re global. We use our <a href="https://www.cloudflare.com/network/">global network</a> to get faster routes to your database, keep connection pools primed, and cache your most frequently run queries as close to users as possible.</p>
    <div>
      <h3>Postgres Message Protocol</h3>
      <a href="#postgres-message-protocol">
        
      </a>
    </div>
    <p>To understand exactly what the challenge with prepared statements is, it's first necessary to dig in a bit to the <a href="https://www.postgresql.org/docs/current/protocol-flow.html">Postgres Message Protocol</a>. Specifically, we are going to take a look at the protocol for an “extended” query, which uses different message types and is a bit more complex than a “simple” query, but which is more powerful and thus more widely used.</p><p>A query using Hyperdrive might be coded something like this, but a lot goes on under the hood in order for Postgres to reliably return your response.</p>
            <pre><code>import postgres from "postgres";

// with Hyperdrive, we don't have to disable prepared statements anymore!
// const sql = postgres(env.HYPERDRIVE.connectionString, {prepare: false});

// make a connection, with the default postgres.js settings (prepare is set to true)
const sql = postgres(env.HYPERDRIVE.connectionString);

// This sends the query, and while it looks like a single action it contains several 
// messages implied within it
let [{ a, b, c, id }] = await sql`SELECT a, b, c, id FROM hyper_test WHERE id = ${target_id}`;</code></pre>
            <p>To prepare a statement, a Postgres client begins by sending a <i>Parse</i> message. This includes the query string, the number of parameters to be interpolated, and the statement's name. The name is a key piece of this puzzle. If it is empty, then Postgres uses a special "unnamed" prepared statement slot that gets overwritten on each new <i>Parse</i>. These are relatively easy to support, as most drivers will keep the entirety of a message sequence for unnamed statements together, and will not try to get too aggressive about reusing the prepared statement because it is overwritten so often.</p><p>If the statement has a name, however, then it is kept prepared for the remainder of the Postgres session (unless it is explicitly removed with <i>DEALLOCATE</i>). This is convenient because parsing a query string and preparing the statement costs bytes sent on the wire and CPU cycles to process, so reusing a statement is quite a nice optimization.</p><p>Once done with <i>Parse</i>, there are a few remaining steps to (the simplest form of) an extended query:</p><ul><li><p>A <i>Bind</i> message, which provides the specific values to be passed for the parameters in the statement (if any).</p></li><li><p>An <i>Execute</i> message, which tells the Postgres server to actually perform the data retrieval and processing.</p></li><li><p>And finally a <i>Sync</i> message, which causes the server to close the implicit transaction, return results, and provides a synchronization point for error handling.</p></li></ul><p>While that is the core pattern for accomplishing an extended protocol query, there are many more complexities possible (named <i>Portal</i>, <i>ErrorResponse</i>, etc.).</p><p>We will briefly mention one other complexity we often encounter in this protocol, which is <i>Describe</i> messages. Many drivers leverage Postgres’ built-in types to help with deserialization of the results into structs or classes. 
This is accomplished by sending a <i>Parse-Describe-Flush/Sync</i> sequence, which will send a statement to be prepared, and will expect back information about the types and data the query will return. This complicates bookkeeping around named prepared statements, as now there are two separate queries, with two separate kinds of responses, that must be kept track of. We won’t go into much depth on the tradeoffs of an additional round-trip in exchange for advanced information about the results’ format, but suffice it to say that it must be handled explicitly in order for the overall system to gracefully support prepared statements.</p><p>So the basic query from our code above looks like this from a message perspective:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ztflSTbecT9o4QU3YLDW3/4ee2003396e5b15a15dd2cb63cdd2711/unnamed-4.png" />
            
            </figure><p>A <a href="https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY">more complete description</a> and the <a href="https://www.postgresql.org/docs/current/protocol-message-formats.html">full structure of each message type</a> are well described in the Postgres documentation.</p><p>So, what's so hard about that?</p>
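To make the framing concrete before answering that question: the Parse message that opens the sequence above can be built with a few lines of std-only Rust. This is a hypothetical sketch for illustration, not Hyperdrive's implementation:

```rust
// Sketch (not Hyperdrive's code): frame a Postgres extended-protocol Parse
// message. The wire format is a 1-byte tag 'P', an Int32 length covering
// everything after the tag (including the length field itself), a
// NUL-terminated statement name, a NUL-terminated query string, and an Int16
// count of pre-declared parameter type OIDs (zero here; the server infers).
fn frame_parse(statement_name: &str, query: &str) -> Vec<u8> {
    let body_len = 4                      // the Int32 length field itself
        + statement_name.len() + 1        // statement name + NUL terminator
        + query.len() + 1                 // query string + NUL terminator
        + 2;                              // Int16 parameter-type count
    let mut msg = Vec::with_capacity(1 + body_len);
    msg.push(b'P');
    msg.extend_from_slice(&(body_len as i32).to_be_bytes());
    msg.extend_from_slice(statement_name.as_bytes());
    msg.push(0);
    msg.extend_from_slice(query.as_bytes());
    msg.push(0);
    msg.extend_from_slice(&0i16.to_be_bytes()); // no pre-declared types
    msg
}
```

Passing an empty statement name selects the "unnamed" slot described above; any non-empty name keeps the statement prepared for the rest of the session.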
    <div>
      <h3>Buffering Messages</h3>
      <a href="#buffering-messages">
        
      </a>
    </div>
    <p>The first challenge that Hyperdrive must solve (that many other connection poolers don't have) is that it's also a cache.</p><p>The happiest path for a query on Hyperdrive never travels far, and we are quite proud of the low latency of our cache hits. However, this presents a particular challenge in the case of an extended protocol query. A <i>Parse</i> by itself is insufficient as a cache key, both because the parameter values in the <i>Bind</i> messages can alter the expected results, and because it might be followed up with either a <i>Describe</i> or an <i>Execute</i> message which will invoke drastically different responses.</p><p>So Hyperdrive cannot simply pass each message to the origin database, as we must buffer them in a message log until we have enough information to reliably distinguish between cache keys. It turns out that receiving a <i>Sync</i> is quite a natural point at which to check whether you have enough information to serve a response. For most scenarios, we buffer until we receive a <i>Sync</i>, and then (assuming the scenario is cacheable) we determine whether we can serve the response from cache or we need to take a connection to the origin database.</p>
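The buffering decision described above can be sketched in std-only Rust (a toy illustration of the idea, not Hyperdrive's actual buffering code):

```rust
// Sketch (not Hyperdrive's code): decide whether a buffered run of Postgres
// frontend messages is complete through a Sync. Each frontend message is a
// 1-byte tag followed by an Int32 length that includes the length field but
// not the tag. We walk frame by frame; only once a complete Sync ('S') frame
// has arrived do we have enough context to choose cache hit vs. origin.
fn batch_complete_through_sync(buf: &[u8]) -> bool {
    let mut i = 0;
    // Need at least the tag plus the 4-byte length to read a frame header.
    while i + 5 <= buf.len() {
        let tag = buf[i];
        let len =
            i32::from_be_bytes([buf[i + 1], buf[i + 2], buf[i + 3], buf[i + 4]]) as usize;
        let end = i + 1 + len; // tag byte + declared length
        if end > buf.len() {
            return false; // final message is still in flight
        }
        if tag == b'S' {
            return true; // Sync closes the batch
        }
        i = end;
    }
    false
}
```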
    <div>
      <h3>Taking a Connection From the Pool</h3>
      <a href="#taking-a-connection-from-the-pool">
        
      </a>
    </div>
    <p>Assuming we aren't serving a response from cache, for whatever reason, we'll need to take an origin connection from our pool. One of the key advantages any connection pooler offers is in allowing many client connections to share few database connections, so minimizing how often and for how long these connections are held is crucial to making Hyperdrive performant.</p><p>To this end, <a href="https://developers.cloudflare.com/hyperdrive/configuration/how-hyperdrive-works/#connection-pooling">Hyperdrive operates</a> in what is traditionally called “transaction mode”. This means that a connection taken from the pool for any given transaction is returned once that transaction concludes. This is in contrast to what is often called “session mode”, where once a connection is taken from the pool it is held by the client until the client disconnects.</p><p>For Hyperdrive, allowing any client to take any database connection is vital. This is because if we "pin" a client to a given database connection then we have one fewer available for every other possible client. You can run yourself out of database connections very quickly once you start down that path, especially when your clients are many small Workers spread around the world.</p><p>The challenge prepared statements present to this scenario is that they exist at the "session" scope, which is to say, at the scope of one connection. If a client prepares a statement on connection A, but tries to reuse it and gets assigned connection B, Postgres will naturally throw an error claiming the statement doesn't exist in the given session. No results will be returned, the client is unhappy, and all that's left is to retry with a <i>Parse</i> message included. This causes extra round-trips between client and server, defeating the whole purpose of what is meant to be an optimization.</p><p>One of the goals of a connection pooler is to be as transparent to the client and server as possible. 
There are limitations, as Postgres will let you do some powerful things to session state that cannot be reasonably shared across arbitrary client connections, but to the extent possible the endpoints should not have to know or care about any multiplexing happening between them.</p><p>This means that when a client sends a <i>Parse</i> message on its connection, it should expect that the statement will be available for reuse when it wants to send a <i>Bind-Execute-Sync</i> sequence later on. It also means that the server should not get <i>Bind</i> messages for statements that only exist on some other session. Maintaining this illusion is the crux of providing support for this feature.</p>
    <div>
      <h3>Putting it all together</h3>
      <a href="#putting-it-all-together">
        
      </a>
    </div>
    <p>So, what does the solution look like? If a client sends <i>Parse-Bind-Execute-Sync</i> with a named prepared statement, then later sends <i>Bind-Execute-Sync</i> to reuse it, how can we make sure that everything happens as expected? The solution, it turns out, needs just a few built-in Rust data structures for efficiently capturing what we need (a <a href="https://doc.rust-lang.org/std/collections/struct.HashMap.html">HashMap</a>, some <a href="https://docs.rs/lru/latest/lru/struct.LruCache.html"><i>LruCaches</i></a> and a <a href="https://doc.rust-lang.org/std/collections/struct.VecDeque.html">VecDeque</a>), and some straightforward business logic to keep track of when to intervene in the messages being passed back and forth.</p><p>Whenever a named <i>Parse</i> comes in, we store it in an in-memory <i>HashMap</i> on the server that handles message processing for that client’s connection. This persists until the client is disconnected. This means that whenever we see anything referencing the statement, we can go retrieve the complete message defining it. We'll come back to this in a moment.</p><p>Once we've buffered all the messages we can and gotten to the point where it's time to return results (let's say because the client sent a <i>Sync</i>), we need to start applying some logic. 
For the sake of brevity we're going to omit talking through error handling here, as it does add some significant complexity but is somewhat out of scope for this discussion.</p><p>There are two main questions that determine how we should proceed:</p><ol><li><p>Does our message sequence include a <i>Parse</i>, or are we trying to reuse a pre-existing statement?</p></li><li><p>Do we have a cache hit or are we serving from the origin database?</p></li></ol><p>This gives us four scenarios to consider:</p><ol><li><p><i>Parse</i> with cache hit</p></li><li><p><i>Parse</i> with cache miss</p></li><li><p>Reuse with cache hit</p></li><li><p>Reuse with cache miss</p></li></ol><p>A <i>Parse</i> with a cache hit is the easiest path to address, as we don't need to do anything special. We use the messages sent as a cache key, and serve the results back to the client. We will still keep the <i>Parse</i> in our <i>HashMap</i> in case we want it later (#2 below), but otherwise we're good to go.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/33CSY4vk0u6lYLkKBRW7XH/291ea17f55196c35caee8d29d0f733a6/unnamed--1--4.png" />
            
            </figure><p>A <i>Parse</i> with a cache miss is a bit more complicated, as now we need to send these messages to the origin server. We take a connection at random from our pool and do so, passing the results back to the client. With that, we've begun to make changes to session state such that all our database connections are no longer identical to each other. To keep track of what we've done to muddy up our state, we keep a <i>LruCache</i> on each connection of which statements it already has prepared. In the case where we need to evict from such a cache, we will also <i>DEALLOCATE</i> the statement on the connection to keep things tracked correctly.</p>
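<p>That per-connection bookkeeping can be sketched roughly as follows (a stdlib-only stand-in that evicts in insertion order rather than true LRU recency, with hypothetical names; the actual implementation uses the <i>lru</i> crate's <i>LruCache</i>):</p>

```rust
use std::collections::{HashSet, VecDeque};

// Sketch of the per-connection statement tracking described above. The
// contract is what matters: when a statement is evicted from tracking,
// the caller must also issue `DEALLOCATE <name>` on that connection so
// the tracked state and the actual session state stay in agreement.
pub struct PreparedOnConnection {
    capacity: usize,
    order: VecDeque<String>,
    names: HashSet<String>,
}

impl PreparedOnConnection {
    pub fn new(capacity: usize) -> Self {
        Self { capacity, order: VecDeque::new(), names: HashSet::new() }
    }

    pub fn contains(&self, name: &str) -> bool {
        self.names.contains(name)
    }

    /// Record that `name` is now prepared on this connection. Returns
    /// the statement name to DEALLOCATE if one had to be evicted.
    pub fn insert(&mut self, name: &str) -> Option<String> {
        if self.names.insert(name.to_string()) {
            self.order.push_back(name.to_string());
        }
        if self.order.len() > self.capacity {
            let evicted = self.order.pop_front()?;
            self.names.remove(&evicted);
            return Some(evicted); // caller sends: DEALLOCATE "<evicted>"
        }
        None
    }
}
```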
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3NFnGuHT6uAmr2dMRdp41m/a95422a38aa44e7720cbdcca7bd73513/unnamed--2--2.png" />
            
            </figure><p>Reuse with a cache hit is yet more tricky, but still straightforward enough. In the example below, we are sent a <i>Bind</i> with the same parameters twice (#1 and #9). We must identify that we received a <i>Bind</i> without a preceding <i>Parse</i>, we must go retrieve that <i>Parse</i> (#10), and we must use the information from it to build our cache key. Once all that is accomplished, we can serve our results from cache, needing only to trim out the <i>ParseComplete</i> within the cached results before returning them to the client.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ksUndQnXpbjzu6Ggm3veo/9fa185c8c8a26829a1c4894efd24ccaa/unnamed--3--2.png" />
            
            </figure><p>Reuse with a cache miss is the hardest scenario, as it may require us to lie in both directions. In the example below, we cache results for one set of parameters (#8), but are sent a <i>Bind</i> with different parameters (#9). As in the cache hit scenario, we must identify that we were not sent a <i>Parse</i> as part of the current message sequence, retrieve it from our <i>HashMap</i> (#10), and build our cache key to GET from cache and confirm the miss (#11). Once we take a connection from the pool, though, we then need to check if it already has the statement we want prepared. If not, we must take our saved <i>Parse</i> and prepend it to our message log to be sent along to the origin database (#13). Thus, what the server receives looks like a perfectly valid <i>Parse-Bind-Execute-Sync</i> sequence. This is where our <i>VecDeque</i> (mentioned above) comes in, as converting our message log to that structure allowed us to very ergonomically make such changes without needing to rebuild the whole byte sequence. Once we receive the response from the server, all that's needed is to trim out the initial <i>ParseComplete</i> response from the server, as a well-made client would likely be very confused receiving such a response to a <i>Parse</i> it didn't send. With that message trimmed out, however, the client is in the position of getting exactly what it asked for, and both sides of the conversation are happy.</p>
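<p>The two interventions above, sneaking the saved <i>Parse</i> into the outgoing message log and trimming the resulting <i>ParseComplete</i> from the response, can be sketched like this (hypothetical types, not Hyperdrive's actual code):</p>

```rust
use std::collections::VecDeque;

/// Prepend the saved `Parse` bytes to the outgoing message log when the
/// chosen connection hasn't seen this statement yet. `VecDeque` makes
/// this cheap: no rebuilding of the whole byte sequence.
pub fn prepend_parse(log: &mut VecDeque<Vec<u8>>, saved_parse: &[u8], already_prepared: bool) {
    if !already_prepared {
        log.push_front(saved_parse.to_vec());
    }
}

/// Backend messages are modeled here as (tag, payload) pairs;
/// ParseComplete's tag byte is b'1'. Remove the leading one so the
/// client never sees a reply to a Parse it didn't send.
pub fn trim_parse_complete(responses: &mut VecDeque<(u8, Vec<u8>)>) {
    if responses.front().map(|(tag, _)| *tag) == Some(b'1') {
        responses.pop_front();
    }
}
```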
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/lWQHcpash88r4sjmT3Thq/993c69dfe03c4a8bb0c4e472df2b4a7a/unnamed--4--1.png" />
            
            </figure>
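<p>Pulling the four scenarios together, the whole decision reduces to the two booleans from earlier: did this sequence include a <i>Parse</i>, and did the cache key hit? A sketch (hypothetical names, not Hyperdrive's actual types):</p>

```rust
// Four-way dispatch over the two questions described above.
#[derive(Debug, PartialEq)]
pub enum Plan {
    ServeCached,                  // 1. Parse + cache hit
    ForwardToOrigin,              // 2. Parse + cache miss
    ServeCachedTrimParseComplete, // 3. reuse + cache hit
    ForwardToOriginMaybePrepend,  // 4. reuse + cache miss
}

pub fn plan(sequence_has_parse: bool, cache_hit: bool) -> Plan {
    match (sequence_has_parse, cache_hit) {
        (true, true) => Plan::ServeCached,
        (true, false) => Plan::ForwardToOrigin,
        (false, true) => Plan::ServeCachedTrimParseComplete,
        (false, false) => Plan::ForwardToOriginMaybePrepend,
    }
}
```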
    <div>
      <h3>Dénouement</h3>
      <a href="#denouement">
        
      </a>
    </div>
    <p>Now that we've got a working solution, where all parties are functioning well, let's review! Our solution lets us share database connections across arbitrary clients with no "pinning", no custom handling on either client or server, and supports reuse of prepared statements to reduce CPU load on re-parsing queries and reduce network traffic on re-sending <i>Parse</i> messages. Engineering always involves tradeoffs, so the cost of this is that we will sometimes still need to sneak in a <i>Parse</i> because a client got assigned a different connection on reuse, and in those scenarios there is a small amount of additional memory overhead because the same statement is prepared on multiple connections.</p><p>And now, if you haven't already, go give <a href="https://developers.cloudflare.com/hyperdrive/">Hyperdrive</a> a spin, and let us know what you think in the <a href="https://discord.com/channels/595317990191398933/1150557986239021106">Hyperdrive Discord channel</a>!</p> ]]></content:encoded>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Hyperdrive]]></category>
            <category><![CDATA[Postgres]]></category>
            <category><![CDATA[SQL]]></category>
            <category><![CDATA[Message Protocol]]></category>
            <category><![CDATA[Prepared Statements]]></category>
            <guid isPermaLink="false">65jmeFCIBZN3YdPbpEwmY1</guid>
            <dc:creator>Andrew Repp</dc:creator>
        </item>
        <item>
            <title><![CDATA[Making state easy with D1 GA, Hyperdrive, Queues and Workers Analytics Engine updates]]></title>
            <link>https://blog.cloudflare.com/making-full-stack-easier-d1-ga-hyperdrive-queues/</link>
            <pubDate>Mon, 01 Apr 2024 13:00:06 GMT</pubDate>
            <description><![CDATA[ We kick off the week with announcements that help developers build stateful applications on top of Cloudflare, including making D1, our SQL database and Hyperdrive, our database accelerating service, generally available ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4BKrpfqvHnl6yaHdXXsCoc/70280206c43fc4ecfa026968440f52f0/image4-31.png" />
            
            </figure>
    <div>
      <h3>Making full-stack easier</h3>
      <a href="#making-full-stack-easier">
        
      </a>
    </div>
    <p>Today might be April Fools, and while we like to have fun as much as anyone else, we like to use this day for serious announcements. In fact, as of today, there are over 2 million developers building on top of Cloudflare’s platform — that’s no joke!</p><p>To kick off this Developer Week, we’re flipping the big “production ready” switch on three products: <a href="https://developers.cloudflare.com/d1/">D1, our serverless SQL database</a>; <a href="https://developers.cloudflare.com/hyperdrive/">Hyperdrive</a>, which makes your <i>existing</i> databases feel like they’re distributed (and faster!); and <a href="https://developers.cloudflare.com/analytics/analytics-engine/">Workers Analytics Engine</a>, our time-series database.</p><p>We’ve been on a mission to allow developers to bring their entire stack to Cloudflare for some time, but what might an application built on Cloudflare look like?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5D3F21rYXhLv0bI6FID3Kc/4b0ca6dfc52e168a852599345e111a02/image6-11.png" />
            
            </figure><p>The diagram itself shouldn’t look too different from the tools you’re already familiar with: you want a <a href="https://developers.cloudflare.com/d1/">database</a> for your core user data. <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">Object storage</a> for assets and user content. Maybe a <a href="https://developers.cloudflare.com/queues/">queue</a> for background tasks, like email or upload processing. A <a href="https://developers.cloudflare.com/kv/">fast key-value store</a> for runtime configuration. Maybe even a <a href="https://developers.cloudflare.com/analytics/analytics-engine/">time-series database</a> for aggregating user events and/or performance data. And that’s before we get to <a href="https://developers.cloudflare.com/workers-ai/">AI</a>, which is increasingly becoming a core part of many applications in search, recommendation and/or image analysis tasks (at the very least!).</p><p>Yet, without having to think about it, this architecture runs on Region: Earth, which means it’s scalable, reliable and fast — all out of the box.</p>
    <div>
      <h3>D1 GA: Production Ready</h3>
      <a href="#d1-ga-production-ready">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6FBwcKFjSHTCL2LcJRtNCo/46c6a403e7f8c743ac8a4dff252d85e4/image2-35.png" />
            
            </figure><p>Your core database is one of the most critical pieces of your infrastructure. It needs to be ultra-reliable. It can’t lose data. It needs to scale. And so we’ve been heads down over the last year getting the pieces into place to make sure D1 is production-ready, and we’re extremely excited to say that D1 — our <a href="https://www.cloudflare.com/developer-platform/products/d1/">global, serverless SQL database</a> — is now Generally Available.</p><p>The GA for D1 lands some of the most asked-for features, including:</p><ul><li><p>Support for 10GB databases — and 50,000 databases per account;</p></li><li><p>New data export capabilities; and</p></li><li><p>Enhanced query debugging (we call it “D1 Insights”) — that allows you to understand what queries are consuming the most time, cost, or that are just plain inefficient…  </p></li></ul><p>… to empower developers to build production-ready applications with D1 to meet all their relational SQL needs. And importantly, in an era where the concept of a “<a href="https://www.cloudflare.com/plans/free/">free plan</a>” or “hobby plan” is seemingly at risk, we have no intention of removing the free tier for D1 or reducing the <i>25 billion row reads</i> included in the $5/mo Workers Paid plan:</p><table><colgroup><col></col><col></col><col></col><col></col></colgroup><tbody><tr><td><p><span>Plan</span></p></td><td><p><span>Rows Read</span></p></td><td><p><span>Rows Written</span></p></td><td><p><span>Storage</span></p></td></tr><tr><td><p><span>Workers</span><span> </span><span>Paid</span></p></td><td><p><span>First 25 billion / month included</span><span><br /></span><span><br /></span><span>+ $0.001 / million rows</span></p></td><td><p><span>First 50 million / month included</span><span><br /></span><span><br /></span><span>+ $1.00 / million rows</span></p></td><td><p><span>First 5 GB included</span></p><br /><p><span>+ $0.75 / GB-mo</span></p></td></tr><tr><td><p><span>Workers 
Free</span></p></td><td><p><span>5 million / day</span></p></td><td><p><span>100,000 / day</span><span><span>	</span></span></p></td><td><p><span>5 GB (total)</span></p></td></tr></tbody></table><p><i>For those who’ve been following D1 since the start: this is the same pricing we announced at </i><a href="/d1-open-beta-is-here"><i>open beta</i></a></p><p>But things don’t just stop at GA: we have some major new features lined up for D1, including global read replication, even larger databases, more <a href="https://developers.cloudflare.com/d1/reference/time-travel/">Time Travel</a> capabilities that will allow you to branch your database, and new APIs for dynamically querying and/or creating new databases-on-the-fly from within a Worker.</p><p>D1’s read replication will automatically deploy read replicas as needed to get data closer to your users: and without you having to spin up, manage scaling, or run into consistency (replication lag) issues. Here’s a sneak preview of what D1’s upcoming Replication API looks like:</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env) {
    const {pathname} = new URL(request.url);
    let resp = null;
    let session = env.DB.withSession(token); // An optional commit token or mode

    // Handle requests within the session.
    if (pathname === "/api/orders/list") {
      // This statement is a read query, so it will work against any
      // replica that has a commit equal or later than `token`.
      const { results } = await session.prepare("SELECT * FROM Orders").run();
      resp = Response.json(results);
    } else if (pathname === "/api/orders/add") {
      const order = await request.json();

      // This statement is a write query, so D1 will send the query to
      // the primary, which always has the latest commit token.
      await session.prepare("INSERT INTO Orders VALUES (?, ?, ?)")
        .bind(order.orderName, order.customer, order.value)
        .run();

      // In order for the application to be correct, this SELECT
      // statement must see the results of the INSERT statement above.
      //
      // D1's new Session API keeps track of commit tokens for queries
      // within the session and will ensure that we won't execute this
      // query until whatever replica we're using has seen the results
      // of the INSERT.
      const { results } = await session.prepare("SELECT COUNT(*) FROM Orders")
        .run();
      resp = Response.json(results);
    }

    // Set the token so we can continue the session in another request.
    resp.headers.set("x-d1-token", session.latestCommitToken);
    return resp;
  }
}</code></pre>
            <p>Importantly, we will give developers the ability to maintain session-based consistency, so that users still see their own changes reflected, whilst still benefiting from the performance and latency gains that replication can bring.</p><p>You can learn more about how D1’s read replication works under the hood <a href="/building-d1-a-global-database/">in our deep-dive post</a>, and if you want to start building on D1 today, <a href="https://developers.cloudflare.com/d1/">head to our developer docs</a> to create your first database.</p>
    <div>
      <h3>Hyperdrive: GA</h3>
      <a href="#hyperdrive-ga">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/47WBGHvqFpRkza2ldA5RBi/7f7f47055e1f4f066e213b88e9e98737/image1-37.png" />
            
            </figure><p>We launched Hyperdrive into open beta <a href="/hyperdrive-making-regional-databases-feel-distributed">last September during Birthday Week</a>, and it’s now Generally Available — or in other words, battle-tested and production-ready.</p><p>If you’re not caught up on what Hyperdrive is, it’s designed to make the centralized databases you already have feel like they’re global. We use our <a href="https://www.cloudflare.com/network/">global network</a> to get faster routes to your database, keep connection pools primed, and cache your most frequently run queries as close to users as possible.</p><p>Importantly, Hyperdrive supports the most popular drivers and ORM (Object Relational Mapper) libraries out of the box, so you don’t have to re-learn or re-write your queries:</p>
            <pre><code>// Use the popular 'pg' driver? Easy. Hyperdrive just exposes a connection string
// to your Worker.
const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
await client.connect();

// Prefer using an ORM like Drizzle? Use it with Hyperdrive too.
// https://orm.drizzle.team/docs/get-started-postgresql#node-postgres
const client = new Client({ connectionString: env.HYPERDRIVE.connectionString });
await client.connect();
const db = drizzle(client);</code></pre>
            <p>But the work on Hyperdrive doesn’t stop just because it’s now “GA”. Over the next few months, we’ll be bringing support for the <i>other</i> most widely deployed database engine there is: MySQL. We’ll also be bringing support for connecting to databases inside private networks (including cloud VPC networks) via <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/">Cloudflare Tunnel</a> and <a href="https://developers.cloudflare.com/magic-wan/">Magic WAN</a>. On top of that, we plan to bring more configurability around invalidation and caching strategies, so that you can make more fine-grained decisions around performance vs. data freshness.</p><p>As we thought about how we wanted to price Hyperdrive, we realized that it just didn’t seem right to charge for it. After all, the performance benefits from Hyperdrive are not only significant, but essential to connecting to traditional database engines. Without Hyperdrive, paying the latency overhead of 6+ round-trips to connect &amp; query your database per request just isn’t right.</p><p>And so we’re happy to announce that <b>for any developer on a Workers Paid plan, Hyperdrive is free</b>. That includes both query caching and connection pooling, as well as the ability to create multiple Hyperdrives — to separate different applications, prod vs. staging, or to provide different configurations (cached vs. 
uncached, for example).</p><table><colgroup><col></col><col></col><col></col></colgroup><tbody><tr><td><p><span>Plan</span></p></td><td><p><span>Price per query</span></p></td><td><p><span>Connection Pooling</span></p></td></tr><tr><td><p><span>Workers</span><span> </span><span>Paid</span></p></td><td><p><span>$0 </span></p></td><td><p><span>$0</span></p></td></tr></tbody></table><p>To get started with Hyperdrive, <a href="https://developers.cloudflare.com/hyperdrive/">head over to the docs</a> to learn how to connect your existing database and start querying it from your Workers.</p>
    <div>
      <h3>Queues: Pull From Anywhere</h3>
      <a href="#queues-pull-from-anywhere">
        
      </a>
    </div>
    <p>The task queue is an increasingly critical part of building a modern, full-stack application, and this is what we had in mind when we <a href="/cloudflare-queues-open-beta">originally announced</a> the open beta of <a href="https://developers.cloudflare.com/queues/">Queues</a>. We’ve since been working on several major Queues features, and we’re launching two of them this week: pull-based consumers and new message delivery controls.</p><p>Any HTTP-speaking client <a href="https://developers.cloudflare.com/queues/reference/pull-consumers/">can now pull messages from a queue</a>: call the new /pull endpoint on a queue to request a batch of messages, and call the /ack endpoint to acknowledge each message (or batch of messages) as you successfully process them:</p>
            <pre><code>// Pull and acknowledge messages from a Queue using any HTTP client
$ curl "https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/pull" -X POST --data '{"visibilityTimeout":10000,"batchSize":100}' \
     -H "Authorization: Bearer ${QUEUES_TOKEN}" \
     -H "Content-Type:application/json"

// Ack the messages you processed successfully; mark others to be retried.
$ curl "https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/queues/${QUEUE_ID}/messages/ack" -X POST --data '{"acks":["lease-id-1", "lease-id-2"],"retries":["lease-id-100"]}' \
     -H "Authorization: Bearer ${QUEUES_TOKEN}" \
     -H "Content-Type:application/json"</code></pre>
            <p>A pull-based consumer can run anywhere, allowing you to run queue consumers alongside your existing legacy cloud infrastructure. Teams inside Cloudflare adopted this early on, with one use-case focused on writing device telemetry to a queue from our <a href="https://www.cloudflare.com/network/">310+ data centers</a> and consuming within some of our back-of-house infrastructure running on Kubernetes. Importantly, our globally distributed queue infrastructure means that messages are retained within the queue until the consumer is ready to process them.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UUkrE3bqqIdQiemV49Hal/496c2d539b366a794d58479c99b1c9ec/image5-19.png" />
            
            </figure><p>Queues also <a href="https://developers.cloudflare.com/queues/reference/batching-retries/#delay-messages">now supports delaying messages</a>, both when sending to a queue, as well as when marking a message for retry. This can be useful to queue (pun intended) tasks for the future, as well as to apply a backoff mechanism if an upstream API or infrastructure has rate limits that require you to pace how quickly you are processing messages.</p>
            <pre><code>// Apply a delay to a message when sending it
await env.YOUR_QUEUE.send(msg, { delaySeconds: 3600 })

// Delay a message (or a batch of messages) when marking it for retry
for (const msg of batch.messages) {
	msg.retry({delaySeconds: 300})
} </code></pre>
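<p>One simple way to choose a <i>delaySeconds</i> value for that backoff is to grow it exponentially with the attempt count, capped at a maximum. This is a hypothetical helper, not a Queues API, sketched in Rust since consumers can run anywhere:</p>

```rust
/// Exponential backoff (hypothetical helper, not a Queues API): double
/// the delay per attempt, capped so retries don't drift arbitrarily
/// far into the future.
pub fn backoff_delay_seconds(attempt: u32, base_seconds: u64, cap_seconds: u64) -> u64 {
    base_seconds
        .saturating_mul(2u64.saturating_pow(attempt))
        .min(cap_seconds)
}
```

A consumer that tracks how many times a message has been retried could pass the result as the delay when marking it for retry.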
            <p>We’ll also be bringing substantially increased per-queue throughput over the coming months on the path to getting Queues to GA. It’s important to us that Queues is <i>extremely</i> reliable: lost or dropped messages mean that a user doesn’t get their order confirmation email, their password reset notification, and/or their uploads processed — each of those is user-impacting and hard to recover from.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1RhxWjKGRmoJtgQ4toybvY/57469d1ee721096a3c2b7551bbd277a4/image3-35.png" />
            
            </figure>
    <div>
      <h3>Workers Analytics Engine is GA</h3>
      <a href="#workers-analytics-engine-is-ga">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/analytics/analytics-engine/">Workers Analytics Engine</a> provides unlimited-cardinality analytics at scale, via a built-in API to write data points from Workers, and a SQL API to query that data.</p><p>Workers Analytics Engine is backed by the same ClickHouse-based system we have depended on for years at Cloudflare. We use it ourselves to observe the health of our own services, to capture product usage data for billing, and to answer questions about specific customers’ usage patterns. At least one data point is written to this system on nearly every request to Cloudflare’s network. Workers Analytics Engine lets you build your own custom analytics using this same infrastructure, while we manage the hard parts for you.</p><p>Since <a href="/workers-analytics-engine">launching in beta</a>, developers have started depending on Workers Analytics Engine for these same use cases and more, from large enterprises to open-source projects like <a href="https://github.com/benvinegar/counterscale/">Counterscale</a>. Workers Analytics Engine has been operating at production scale with mission-critical workloads for years — but we hadn’t shared anything about pricing, until today.</p><p>We are keeping Workers Analytics Engine pricing simple, and based on two metrics:</p><ol><li><p><b>Data points written</b> — every time you call <a href="https://developers.cloudflare.com/analytics/analytics-engine/get-started/#3-write-data-from-your-worker">writeDataPoint()</a> in a Worker, this counts as one data point written. Every data point costs the same amount — unlike other platforms, there is no penalty for adding dimensions or cardinality, and no need to predict what the size and cost of a compressed data point might be.</p></li><li><p><b>Read queries</b> — every time you post to the Workers Analytics Engine <a href="https://developers.cloudflare.com/analytics/analytics-engine/sql-api/">SQL API</a>, this counts as one read query. 
Every query costs the same amount — unlike other platforms, there is no penalty for query complexity, and no need to reason about the number of rows of data that will be read by each query.</p></li></ol><p>Both the Workers Free and Workers Paid plans will include an allocation of data points written and read queries, with pricing for additional usage as follows:</p><table><colgroup><col></col><col></col><col></col></colgroup><tbody><tr><td><p><span>Plan</span></p></td><td><p><span>Data points written</span></p></td><td><p><span>Read queries</span></p></td></tr><tr><td><p><span>Workers</span><span> </span><span>Paid</span></p></td><td><p><span>10 million included per month</span></p><p><span><br /></span><span>+$0.25 per additional million</span></p></td><td><p><span>1 million included per month</span></p><p><span><br /></span><span>+$1.00 per additional million</span></p></td></tr><tr><td><p><span>Workers Free</span></p></td><td><p><span>100,000 included per day</span></p></td><td><p><span>10,000 included per day</span></p></td></tr></tbody></table><p>With this pricing, you can answer, “how much will Workers Analytics Engine cost me?” by counting the number of times you call a function in your Worker, and how many times you make a request to an HTTP API endpoint. Napkin math, rather than spreadsheet math.</p><p>This pricing will be made available to everyone in coming months. Between now and then, Workers Analytics Engine continues to be available at no cost. You can <a href="https://developers.cloudflare.com/analytics/analytics-engine/get-started/#limits">start writing data points from your Worker today</a> — it takes just a few minutes and less than 10 lines of code to start capturing data. We’d love to hear what you think.</p>
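<p>Using the Workers Paid numbers in the table above, the napkin math can be written down directly. This is a hypothetical helper for estimation only, not an official billing formula:</p>

```rust
/// Estimate a Workers Paid monthly Analytics Engine bill from the
/// pricing table above (hypothetical helper, not an official formula):
/// 10M data points and 1M read queries included, then $0.25 and $1.00
/// per additional million, respectively.
pub fn analytics_engine_monthly_usd(data_points: u64, read_queries: u64) -> f64 {
    let extra_write_millions = data_points.saturating_sub(10_000_000) as f64 / 1e6;
    let extra_read_millions = read_queries.saturating_sub(1_000_000) as f64 / 1e6;
    extra_write_millions * 0.25 + extra_read_millions * 1.00
}
```

For example, 30 million data points written and 1 million read queries in a month would come to $5.00.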
    <div>
      <h3>The week is just getting started</h3>
      <a href="#the-week-is-just-getting-started">
        
      </a>
    </div>
    <p>Tune in to what we have in store for you tomorrow on our second day of Developer Week. If you have questions or want to show off something cool you already built, please join our developer <a href="https://discord.cloudflare.com/"><i>Discord</i></a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Hyperdrive]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">5O8kPvrc2dyHIwmf2c0shv</guid>
            <dc:creator>Rita Kozlov</dc:creator>
            <dc:creator>Matt Silverlock</dc:creator>
        </item>
    </channel>
</rss>