
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 13 Apr 2026 19:45:30 GMT</lastBuildDate>
        <item>
            <title><![CDATA[D1: open beta is here]]></title>
            <link>https://blog.cloudflare.com/d1-open-beta-is-here/</link>
            <pubDate>Thu, 28 Sep 2023 13:00:14 GMT</pubDate>
            <description><![CDATA[ D1 is now in open beta, and the theme is “scale”: with higher per-database storage limits and the ability to create more databases, we’re unlocking the ability for developers to build production-scale applications on D1 ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4sioTCCEWQ0hiLg5ZSCD46/c53658e14bc379bea56cd0f3fed1d42b/image1-37.png" />
            
            </figure><p><b>D1 is now in open beta</b>, and the theme is “scale”: with higher per-database storage limits <i>and</i> the ability to create more databases, we’re unlocking the ability for developers to build production-scale applications on D1. Any developers with an existing paid Workers plan don’t need to lift a finger to benefit: we’ve retroactively applied this to all existing D1 databases.</p><p>If you missed the <a href="/d1-turning-it-up-to-11/">last D1 update</a> back during Developer Week, the <a href="https://developers.cloudflare.com/d1/changelog/">multitude of updates in the changelog</a>, or are just new to D1 in general: read on.</p>
    <div>
      <h3>Remind me: D1? Databases?</h3>
      <a href="#remind-me-d1-databases">
        
      </a>
    </div>
    <p>D1 is our <a href="https://www.cloudflare.com/developer-platform/products/d1/">native serverless database</a>, which we launched into alpha in November last year: the queryable database complement to <a href="https://developers.cloudflare.com/kv/">Workers KV</a>, <a href="https://developers.cloudflare.com/durable-objects/">Durable Objects</a> and <a href="https://developers.cloudflare.com/r2/">R2</a>.</p><p>When we set out to build D1, we knew a few things for certain: it needed to be fast, it needed to be incredibly easy to create a database, and it needed to be SQL-based.</p><p>That last one was critical: so that developers could a) avoid learning another custom query language and b) make it easier for existing query builders, ORM (object relational mapper) libraries and other tools to connect to D1 with minimal effort. From this, we’ve seen a huge number of projects build in support for D1: from support for D1 in the <a href="https://github.com/drizzle-team/drizzle-orm/blob/main/examples/cloudflare-d1/README.md">Drizzle ORM</a> and <a href="https://developers.cloudflare.com/d1/platform/community-projects/#d1-adapter-for-kysely-orm">Kysely</a>, to the <a href="https://t4stack.com/">T4 App</a>, a full-stack toolkit that uses D1 as its database.</p><p>We also knew that D1 couldn’t be the only way to query a database from Workers: for teams with existing databases and thousands of lines of SQL or existing ORM code, migrating across to D1 isn’t going to be an afternoon’s work. For those teams, we built <a href="/hyperdrive-making-regional-databases-feel-distributed/">Hyperdrive</a>, allowing you to connect to your existing databases and make them feel global. We think this gives teams flexibility: combine D1 and Workers for globally distributed apps, and use Hyperdrive for querying the databases you have in legacy clouds and just can’t get rid of overnight.</p>
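    <p>To make the query model concrete, here is a minimal sketch of querying D1 from a Worker using its prepared-statement API. The binding name (DB) and the table and column names are assumptions for illustration, not names from a real project:</p>
            <pre><code>// A minimal Worker that queries a D1 database bound as env.DB
// (the binding name "DB" would be configured in wrangler.toml; it is an assumption here).
const worker = {
  async fetch(request, env) {
    const id = new URL(request.url).searchParams.get("id");

    // Prepared statements with bound parameters keep user input out of the SQL string.
    const { results } = await env.DB
      .prepare("SELECT * FROM [Order] WHERE Id = ?")
      .bind(id)
      .all();

    return new Response(JSON.stringify(results), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;</code></pre>
    <p>The same prepared statement can be reused with different bound values, which is the shape of query that builders like Drizzle and Kysely produce for you.</p>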
    <div>
      <h3>Larger databases, and more of them</h3>
      <a href="#larger-databases-and-more-of-them">
        
      </a>
    </div>
    <p>This has been the biggest ask from the thousands of D1 users throughout the alpha: not just more databases, but also <i>bigger</i> databases.</p><p><b>Developers on the Workers paid plan will now be able to grow each database up to 2GB and create 50,000 databases (up from 500MB and 10). Yes, you read that right: 50,000 databases per account. This unlocks a whole raft of database-per-user use-cases and enables true isolation between customers, something that traditional relational database deployments can’t.</b></p><p>We’ll be continuing to work on unlocking even larger databases over the coming weeks and months: developers using the D1 beta will see automatic increases to these limits published on <a href="https://developers.cloudflare.com/d1/changelog/">D1’s public changelog</a>.</p><p>One of the biggest impediments to double-digit-gigabyte databases is performance: we want to ensure that a database can load in and be ready <i>really</i> quickly — cold starts of seconds (or more) just aren’t acceptable. A 10GB or 20GB database that takes 15 seconds before it can answer a query ends up being pretty frustrating to use.</p><p>Users on the <a href="https://www.cloudflare.com/plans/free/">Workers free plan</a> will keep the ten 500MB databases (<a href="https://developers.cloudflare.com/d1/changelog/#per-database-limit-now-500-mb">changelog</a>) forever: we want to give more developers the room to experiment with D1 and Workers before jumping in.</p>
    <div>
      <h3>Time Travel is here</h3>
      <a href="#time-travel-is-here">
        
      </a>
    </div>
    <p><a href="https://developers.cloudflare.com/d1/learning/time-travel/">Time Travel</a> allows you to roll your database back to a specific point in time: specifically, any minute in the last 30 days. And it’s enabled by default for every D1 database, doesn’t cost any more, and doesn’t count against your storage limit.</p><p>For those who have been keeping tabs: we originally announced Time Travel earlier this year, and made it <a href="https://developers.cloudflare.com/d1/changelog/#time-travel">available to all D1 users in July</a>. At its core, it’s deceptively simple: Time Travel introduces the concept of a “bookmark” to D1. A bookmark represents the state of a database at a specific point in time: effectively, a position in an append-only log of changes. Time Travel can take a timestamp and turn it into a bookmark, or accept a bookmark directly, allowing you to restore back to that point. Even better: restoring doesn’t prevent you from going back further.</p><p>We think Time Travel works best with an example, so let’s make a change to a database: one with an Order table that stores every order made against our e-commerce store:</p>
            <pre><code># To illustrate: we have 89,185 unique addresses in our order database. 
➜  wrangler d1 execute northwind --command "SELECT count(distinct ShipAddress) FROM [Order]" 
┌─────────────────────────────┐
│ count(distinct ShipAddress) │
├─────────────────────────────┤
│ 89185                       │
└─────────────────────────────┘</code></pre>
            <p>OK, great. Now what if we wanted to make a change to a specific set of orders: an address change or freight company change?</p>
            <pre><code># I think we might be forgetting something here...
➜  wrangler d1 execute northwind --command "UPDATE [Order] SET ShipAddress = 'Av. Veracruz 38, Roma Nte., Cuauhtémoc, 06700 Ciudad de México, CDMX, Mexico'"</code></pre>
            <p>Wait: we’ve made a mistake that many, many folks have before: we forgot the WHERE clause on our UPDATE query. Instead of updating a specific order Id, we’ve instead updated the ShipAddress for every order in our table.</p>
            <pre><code># Every order is now going to a wine bar in Mexico City. 
➜  wrangler d1 execute northwind --command "SELECT count(distinct ShipAddress) FROM [Order]" 
┌─────────────────────────────┐
│ count(distinct ShipAddress) │
├─────────────────────────────┤
│ 1                           │
└─────────────────────────────┘</code></pre>
            <p>Panic sets in. Did we remember to make a backup before we did this? How long ago was it? Did we turn on point-in-time recovery? It seemed potentially expensive at the time…</p><p>It’s OK. We’re using D1. We can Time Travel. It’s on by default: let’s fix this and travel back a few minutes.</p>
            <pre><code># Let's go back in time.
➜  wrangler d1 time-travel restore northwind --timestamp="2023-09-23T14:20:00Z"

🚧 Restoring database northwind from bookmark 0000000b-00000002-00004ca7-9f3dba64bda132e1c1706a4b9d44c3c9
✔ OK to proceed (y/N) … yes

⚡️ Time travel in progress...
✅ Database dash-db restored back to bookmark 00000000-00000004-00004ca7-97a8857d35583887de16219c766c0785
↩️ To undo this operation, you can restore to the previous bookmark: 00000013-ffffffff-00004ca7-90b029f26ab5bd88843c55c87b26f497</code></pre>
            <p>Let's check if it worked:</p>
            <pre><code># Phew. We're good. 
➜  wrangler d1 execute northwind --command "SELECT count(distinct ShipAddress) FROM [Order]" 
┌─────────────────────────────┐
│ count(distinct ShipAddress) │
├─────────────────────────────┤
│ 89185                       │
└─────────────────────────────┘</code></pre>
    <p>We think that Time Travel becomes even more powerful when you have many smaller databases, too: the downsides of any restore operation are reduced further and scoped to a single user or tenant.</p><p>This is also just the beginning for Time Travel: we’re working to support not just restoring a database, but also the ability to fork from and overwrite existing databases. If you can fork a database with a single command and/or test migrations and schema changes against real data, you can de-risk a lot of the traditional challenges that working with databases has historically implied.</p>
    <div>
      <h3>Row-based pricing</h3>
      <a href="#row-based-pricing">
        
      </a>
    </div>
    <p><a href="/d1-turning-it-up-to-11/#not-going-to-burn-a-hole-in-your-wallet">Back in May</a> we announced pricing for D1, to a lot of positive feedback around how much we’d included in our Free and Paid plans. In August, we published a new row-based model, replacing the prior byte-based units, that makes it easier to predict and quantify your usage. Specifically, we moved to rows because they’re easier to reason about: if you’re writing a row, it doesn’t matter if it’s 1KB or 1MB. If your read query filters on an indexed column, you’ll see not only performance benefits, but cost savings too.</p><p>Here’s D1’s pricing — almost everything has stayed the same, with the added benefit of charging based on rows:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4053N3dvxuEp46TQG6xec9/74244f620374666d3b8fcbcf5d0016bb/Screenshot-2023-09-29-at-09.33.51.png" />
            
            </figure><p>D1’s pricing — you can find more details in <a href="https://developers.cloudflare.com/d1/platform/pricing/">D1’s public documentation</a>.</p><p>As before, D1 does not charge you for “database hours”, the number of databases, or point-in-time recovery (<a href="https://developers.cloudflare.com/d1/learning/time-travel/">Time Travel</a>) — just query D1 and pay for your reads, writes, and storage — that’s it.</p><p>We believe this makes D1 not only far more cost-efficient, but also easier to use for managing multiple databases that isolate customer data or prod vs. staging: we don’t care <i>which</i> database you query. Manage your data how you like, separate your customer data, and avoid falling into the trap of “Billing Based Architecture”, where you build solely around how you’re charged, even if it’s not intuitive or what makes sense for your team.</p><p>To make it easier to see both what a given query costs <i>and</i> when to <a href="https://developers.cloudflare.com/d1/learning/using-indexes/">optimize your queries with indexes</a>, D1 also returns the number of rows a query read or wrote (or both) so that you can understand what it’s costing you in both cents and speed.</p><p>For example, the following query filters over orders based on date:</p>
            <pre><code>SELECT * FROM [Order] WHERE ShippedDate &gt; '2016-01-22'

[
  {
    "results": [],
    "success": true,
    "meta": {
      "duration": 5.032,
      "size_after": 33067008,
      "rows_read": 16818,
      "rows_written": 0
    }
  }
]</code></pre>
            <p>The unindexed query above scans 16,818 rows. Even if we don’t optimize it, the Workers Paid plan includes 25 billion rows read per month, meaning we could run this query nearly 1.5 million times in a month before having to worry about extra costs.</p><p>But we can do better with an index:</p>
            <pre><code>CREATE INDEX IF NOT EXISTS idx_orders_date ON [Order](ShippedDate)</code></pre>
            <p>With the index created, let’s see how many rows our query needs to read now:</p>
            <pre><code>SELECT * FROM [Order] WHERE ShippedDate &gt; '2016-01-22'

[
  {
    "results": [],
    "success": true,
    "meta": {
      "duration": 3.793,
      "size_after": 33067008,
      "rows_read": 417,
      "rows_written": 0
    }
  }
]</code></pre>
            <p>The same query with an index on the ShippedDate column reads just 417 rows: not only is it faster (duration is in milliseconds!), but it costs us less: we could run this query 59 million times per month before we’d have to pay any more than what the $5 Workers plan gives us.</p><p>D1 also <a href="https://developers.cloudflare.com/d1/platform/metrics-analytics/#metrics">exposes row counts</a> via both the Cloudflare dashboard and our GraphQL analytics API: so not only can you look at this per-query when you’re tuning performance, but you can also break down query patterns across all of your databases.</p>
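            <p>To see how these row counts translate into a monthly budget, here is a back-of-the-envelope sketch. It assumes the 25 billion rows read included in the Workers Paid plan (per D1’s pricing above) and uses the rows_read figures from the two queries we just ran:</p>
            <pre><code>// Rows read included per month on the Workers Paid plan, per D1's pricing.
const INCLUDED_ROWS_READ = 25_000_000_000;

// How many times a query can run per month within the included rows read.
function queriesWithinIncludedReads(rowsReadPerQuery) {
  return Math.floor(INCLUDED_ROWS_READ / rowsReadPerQuery);
}

console.log(queriesWithinIncludedReads(16818)); // unindexed scan: 1486502 (~1.5M runs)
console.log(queriesWithinIncludedReads(417));   // indexed query: 59952038 (~60M runs)</code></pre>
            <p>The indexed version buys roughly 40x more headroom for the same spend, which is why watching rows_read is the fastest way to tune both cost and latency.</p>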
    <div>
      <h3>D1 for Platforms</h3>
      <a href="#d1-for-platforms">
        
      </a>
    </div>
    <p>Throughout D1’s alpha period, we’ve both heard from and worked with teams who are excited about D1’s ability to scale out horizontally: the ability to deploy a database-per-customer (or user!) in order to keep data closer to where teams access it <i>and</i> more strongly isolate that data from their other users.</p><p>Teams building the next big thing on <a href="https://developers.cloudflare.com/cloudflare-for-platforms/workers-for-platforms/">Workers for Platforms</a> — think of it as “Functions as a Service, as a Service” — can use D1 to deploy a <b>database per user</b> — keeping each customer’s data strongly separated from the rest.</p><p>For example, and as one of the early adopters of D1, <a href="https://twitter.com/roninapp">RONIN</a> is building an edge-first content &amp; data platform backed by a dedicated D1 database per customer, which allows customers to place data closer to users and provides each customer isolation from the queries of others.</p><p>Instead of spinning up and managing countless traditional database instances, RONIN uses D1 for Platforms to offer automatic infinite scalability at the edge. This allows RONIN to focus on providing an intuitive editing experience for your content.</p><p>When it comes to enabling “D1 for Platforms”, we’ve thought about this in a few ways from the very beginning:</p><ul><li><p><b>Support for 100,000+ databases for Workers for Platforms users — there’s no limit, but if we said “unlimited” you might not believe us — on top of the 50,000 databases per account that D1 already enables.</b></p></li><li><p>D1’s pricing: you don’t pay per-database or for “idle databases”. If you have a range of users, from thousands of queries per second down to 1–2 every 10 minutes, you aren’t paying more for “database hours” on the less-trafficked databases, or having to plan around spiky workloads across your user base.</p></li><li><p>The ability to programmatically configure more databases via <a href="https://developers.cloudflare.com/api/operations/cloudflare-d1-create-database">D1’s HTTP API</a> <i>and</i> <a href="https://developers.cloudflare.com/api/operations/worker-script-patch-settings">attach them to your Worker</a> without re-deploying. There’s no “provisioning” delay, either: you create the database, and it’s immediately ready to query by you or your users.</p></li><li><p>Detailed <a href="https://developers.cloudflare.com/d1/platform/metrics-analytics/">per-database analytics</a>, so you can understand which databases are being used and how they’re being queried via D1’s GraphQL analytics API.</p></li></ul><p>If you’re building the next big platform on top of Workers &amp; want to use D1 at scale — whether you’re part of the <a href="https://www.cloudflare.com/lp/workers-launchpad/">Workers Launchpad program</a> or not — reach out.</p>
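    <p>As a sketch of that programmatic flow (the endpoint shape follows the D1 HTTP API documentation linked above; the account ID, API token, and database name are placeholders, and error handling is omitted):</p>
            <pre><code>// Create a new D1 database via the Cloudflare v4 API.
// accountId, apiToken and name are placeholders you supply yourself.
async function createDatabase(accountId, apiToken, name) {
  const response = await fetch(
    "https://api.cloudflare.com/client/v4/accounts/" + accountId + "/d1/database",
    {
      method: "POST",
      headers: {
        "Authorization": "Bearer " + apiToken,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ name: name }),
    }
  );
  return response.json();
}</code></pre>
    <p>Because there is no provisioning delay, a platform can call this when a new customer signs up and start routing that customer’s queries to their own database immediately.</p>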
    <div>
      <h3>What’s next for D1?</h3>
      <a href="#whats-next-for-d1">
        
      </a>
    </div>
    <p><b>We’re setting a clear goal: we want to make D1 “generally available” (GA) for production use-cases by early next year</b> <b>(Q1 2024)</b>. Although you can already use D1 without a waitlist or approval process, we understand that the GA label is an important one for many when it comes to a database (as do we).</p><p>Between now and GA, we’re working on some really key parts of the D1 vision, with a continued focus on reliability and performance.</p><p>One of the biggest remaining pieces of that vision is global read replication, which we <a href="/d1-turning-it-up-to-11/">wrote about earlier this year</a>. Importantly, replication will be free, won’t multiply your storage consumption, and will still enable session consistency (read-your-writes). Part of D1’s mission is about getting data closer to where users are, and we’re excited to land it.</p><p>We’re also working to expand <a href="https://developers.cloudflare.com/d1/learning/time-travel/">Time Travel</a>, D1’s built-in point-in-time recovery capabilities, so that you can branch and/or clone a database from a specific point-in-time on the fly.</p><p>We’ll also <b>be progressively opening up our limits around per-database storage, unlocking more storage per account, and the number of databases you can create over the rest of this year</b>, so keep an eye on the D1 <a href="https://developers.cloudflare.com/d1/changelog/">changelog</a> (or your inbox).</p><p>In the meantime, if you haven’t yet used D1, you can <a href="https://developers.cloudflare.com/d1/get-started/">get started</a> right now, visit D1’s <a href="https://developers.cloudflare.com/d1/">developer documentation</a> to spark some ideas, or <a href="https://discord.cloudflare.com/">join the #d1-beta channel</a> on our Developer Discord to talk to other D1 developers and our product-engineering team.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Database]]></category>
            <category><![CDATA[D1]]></category>
            <guid isPermaLink="false">5I0knbF5YIn2PbvvOTa1q2</guid>
            <dc:creator>Matt Silverlock</dc:creator>
            <dc:creator>Ben Yule</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing support for WASI on Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/announcing-wasi-on-workers/</link>
            <pubDate>Thu, 07 Jul 2022 16:09:43 GMT</pubDate>
            <description><![CDATA[ Today, we are announcing experimental support for WASI (the WebAssembly System Interface) on Cloudflare Workers and support within wrangler2 to make it a joy to work with ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/38dw1f6v1MK3BDS3rYr7mE/c986b7fcb97d2ae3b88688cdce832f36/image1-OG-2.png" />
            
            </figure><p>Today, we are announcing experimental support for WASI (the WebAssembly System Interface) on Cloudflare Workers and support within wrangler2 to make it a joy to work with. We continue to be incredibly excited about the entire WebAssembly ecosystem and are eager to adopt the standards as they are developed.</p>
    <div>
      <h3>A Quick Primer on WebAssembly</h3>
      <a href="#a-quick-primer-on-webassembly">
        
      </a>
    </div>
    <p>So what is WASI anyway? To understand WASI, and why we’re excited about it, it’s worth a quick recap of WebAssembly, and the ecosystem around it.</p><p>WebAssembly promised us a future in which code written in compiled languages could be compiled to a common binary format and run in a secure sandbox, at near native speeds. While WebAssembly was designed with the browser in mind, the model rapidly extended to server-side platforms such as Cloudflare Workers (which <a href="/webassembly-on-cloudflare-workers/">has supported WebAssembly</a> since 2017).</p><p>WebAssembly was originally designed to run <i>alongside</i> JavaScript, and requires developers to interface directly with JavaScript in order to access the world outside the sandbox. To put it another way, WebAssembly does not provide any standard interface for I/O tasks such as interacting with files, accessing the network, or reading the system clock. This means if you want to respond to an event from the outside world, it's up to the developer to handle that event in JavaScript, and directly call functions exported from the WebAssembly module. Similarly, if you want to perform I/O from within WebAssembly, you need to implement that logic in JavaScript and import it into the WebAssembly module.</p><p>Custom toolchains such as Emscripten or libraries such as wasm-bindgen have emerged to make this easier, but they are language-specific and add a tremendous amount of complexity and bloat. We've even built our own library, workers-rs, using wasm-bindgen, which attempts to make writing applications in Rust feel native within a Worker – but this has proven not only difficult to maintain, but also requires developers to write code that is Workers-specific, and is not portable outside the Workers ecosystem.</p><p>We need more.</p>
    <div>
      <h3>The WebAssembly System Interface (WASI)</h3>
      <a href="#the-webassembly-system-interface-wasi">
        
      </a>
    </div>
    <p>WASI aims to provide a standard interface that any language compiling to WebAssembly can target. You can read the original post by Lin Clark <a href="https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/">here</a>, which gives an excellent introduction – code cartoons and all. In a nutshell, Lin describes WebAssembly as an <i>assembly language</i> for a 'conceptual machine', whereas WASI is a <i>systems interface</i> for a ‘conceptual operating system.’</p><p>This standardization of the system interface has paved the way for existing toolchains to cross-compile existing codebases to the wasm32-wasi target. A tremendous amount of progress has already been made, specifically within Clang/LLVM via the <a href="https://github.com/WebAssembly/wasi-sdk">wasi-sdk</a> and <a href="https://doc.rust-lang.org/stable/nightly-rustc/rustc_target/spec/wasm32_wasi/index.html">Rust</a> toolchains. These toolchains leverage a version of <a href="https://github.com/WebAssembly/wasi-libc">libc</a>, built on top of WASI 'system calls', that provides POSIX-standard API calls. There are even basic implementations in more fringe toolchains such as <a href="https://tinygo.org/docs/guides/webassembly/">TinyGo</a> and <a href="https://swiftwasm.org/">SwiftWasm</a>.</p><p>Practically speaking, this means that you can now write applications that not only interoperate with any WebAssembly runtime implementing the standard, but also with any POSIX-compliant system! The exact same 'Hello, world!' binary that runs on your local Linux/Mac/Windows (WSL) machine can run, unchanged, on any runtime that implements WASI.</p>
    <div>
      <h3>Show me the code</h3>
      <a href="#show-me-the-code">
        
      </a>
    </div>
    <p>WASI sounds great, but does it actually make my life easier? You tell us. Let’s run through an example of how this would work in practice.</p><p>First, let’s generate a basic Rust “Hello, world!” application, compile, and run it.</p>
            <pre><code>$ cargo new hello_world
$ cd ./hello_world
$ cargo build --release
   Compiling hello_world v0.1.0 (/Users/benyule/hello_world)
    Finished release [optimized] target(s) in 0.28s
$ ./target/release/hello_world
Hello, world!</code></pre>
            <p>It doesn’t get much simpler than this. You’ll notice we only define a main() function that calls println! to write to stdout.</p>
            <pre><code>fn main() {
    println!("Hello, world!");
}</code></pre>
            <p>Now, let’s take the exact same program and compile against the wasm32-wasi target, and run it in an ‘off the shelf’ wasm runtime such as <a href="https://wasmtime.dev/">Wasmtime</a>.</p>
            <pre><code>$ cargo build --target wasm32-wasi --release
$ wasmtime target/wasm32-wasi/release/hello_world.wasm

Hello, world!</code></pre>
            <p>Neat! The same code compiles and runs in multiple POSIX environments.</p><p>Finally, let’s take the binary we just generated for Wasmtime, but instead publish it to Workers using Wrangler2.</p>
            <pre><code>$ npx wrangler@wasm dev target/wasm32-wasi/release/hello_world.wasm
$ curl http://localhost:8787/

Hello, world!</code></pre>
            <p>Unsurprisingly, it works! The same code is compatible across multiple POSIX environments, and the same binary is compatible across multiple WASM runtimes.</p>
    <div>
      <h3>Running your CLI apps in the cloud</h3>
      <a href="#running-your-cli-apps-in-the-cloud">
        
      </a>
    </div>
    <p>The attentive reader may notice that we played a small trick with the HTTP request made via cURL. In this example, we actually stream stdin and stdout to/from the Worker using the HTTP request and response body respectively. This pattern enables some really interesting use cases, specifically, programs designed to run on the command line can be deployed as 'services' to the cloud.</p><p>‘Hexyl’ is an example that works completely out of the box. Here, we ‘cat’ a binary file on our local machine and ‘pipe’ the output to curl, which will then POST that output to our service and stream the result back. Following the steps we used to compile our 'Hello World!', we can compile hexyl.</p>
            <pre><code>$ git clone git@github.com:sharkdp/hexyl.git
$ cd ./hexyl
$ cargo build --target wasm32-wasi --release</code></pre>
            <p>And without further modification we were able to take a real-world program and create something we can now run or deploy. Again, let's tell wrangler2 to preview hexyl, but this time give it some input.</p>
            <pre><code>$ npx wrangler@wasm dev target/wasm32-wasi/release/hexyl.wasm
$ echo "Hello, world\!" | curl -X POST --data-binary @- http://localhost:8787

┌────────┬─────────────────────────┬─────────────────────────┬────────┬────────┐
│00000000│ 48 65 6c 6c 6f 20 77 6f ┊ 72 6c 64 21 0a          │Hello wo┊rld!_   │
└────────┴─────────────────────────┴─────────────────────────┴────────┴────────┘
</code></pre>
            <p>Give it a try yourself by hitting <a href="https://hexyl.examples.workers.dev">https://hexyl.examples.workers.dev</a>.</p>
            <pre><code>echo "Hello world\!" | curl https://hexyl.examples.workers.dev/ -X POST --data-binary @- --output -</code></pre>
            <p>A more useful example, though it requires a bit more work, would be to deploy a utility such as swc (swc.rs) to the cloud and use it as an on-demand JavaScript/TypeScript transpilation service. Here, we have a few extra steps to ensure that the compiled output is as small as possible, but it otherwise runs out-of-the-box. Those steps are detailed in <a href="https://github.com/zebp/wasi-example-swc">https://github.com/zebp/wasi-example-swc</a>, but for now let’s gloss over that and interact with the hosted example.</p>
            <pre><code>$ echo "const x = (x, y) =&gt; x * y;" | curl -X POST --data-binary @- https://swc-wasi.examples.workers.dev/ --output -

var x=function(a,b){return a*b}</code></pre>
            <p>Finally, we can also do the same with C/C++, though it requires a little more lifting to get our Makefile right. Here we show an example of compiling zstd and uploading it as a streaming compression service.</p><p><a href="https://github.com/zebp/wasi-example-zstd">https://github.com/zebp/wasi-example-zstd</a></p>
            <pre><code>$ echo "Hello world\!" | curl https://zstd.examples.workers.dev/ -s -X POST --data-binary @- | file -</code></pre>
            
    <div>
      <h3>What if I want to use WASI from within a JavaScript Worker?</h3>
      <a href="#what-if-i-want-to-use-wasi-from-within-a-javascript-worker">
        
      </a>
    </div>
    <p>Wrangler can make it really easy to deploy code without having to worry about the Workers ecosystem, but in some cases you may actually want to invoke your WASI-based WASM module from JavaScript. This can be achieved with the following simple boilerplate. An updated README will be kept at <a href="https://github.com/cloudflare/workers-wasi">https://github.com/cloudflare/workers-wasi</a>.</p>
            <pre><code>import { WASI } from "@cloudflare/workers-wasi";
import demoWasm from "./demo.wasm";

export default {
  async fetch(request, _env, ctx) {
    // Creates a TransformStream we can use to pipe our stdout to our response body.
    const stdout = new TransformStream();
    const wasi = new WASI({
      args: [],
      stdin: request.body,
      stdout: stdout.writable,
    });

    // Instantiate our WASM with our demo module and our configured WASI import.
    const instance = new WebAssembly.Instance(demoWasm, {
      wasi_snapshot_preview1: wasi.wasiImport,
    });

    // Keep our worker alive until the WASM has finished executing.
    ctx.waitUntil(wasi.start(instance));

    // Finally, let's reply with the WASM's output.
    return new Response(stdout.readable);
  },
};</code></pre>
            <p>Now with our JavaScript boilerplate and wasm, we can easily deploy our worker with Wrangler’s WASM feature.</p>
            <pre><code>$ npx wrangler publish
Total Upload: 473.89 KiB / gzip: 163.79 KiB
Uploaded wasi-javascript (2.75 sec)
Published wasi-javascript (0.30 sec)
  wasi-javascript.zeb.workers.dev</code></pre>
            
    <div>
      <h2>Back to the future</h2>
      <a href="#back-to-the-future">
        
      </a>
    </div>
    <p>For those of you who have been around for the better part of the past couple of decades, you may notice this looks very similar to RFC 3875, better known as CGI (the Common Gateway Interface). While our example here certainly does not conform to the specification, you can imagine how this can be extended to turn the stdin of a basic 'command line' application into a full-blown http handler.</p><p>We are thrilled to learn where developers take this from here. Share what you build with us on <a href="https://discord.com/invite/cloudflaredev">Discord</a> or <a href="https://twitter.com/CloudflareDev">Twitter</a>!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[WASM]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">4zx8IwlpdBe2PLRTAA7xje</guid>
            <dc:creator>Ben Yule</dc:creator>
            <dc:creator>Zebulon Piasecki</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Relational Database Connectors]]></title>
            <link>https://blog.cloudflare.com/relational-database-connectors/</link>
            <pubDate>Mon, 15 Nov 2021 13:59:29 GMT</pubDate>
            <description><![CDATA[ Customers can connect to a Postgres or MySQL database directly from their Workers using a Cloudflare Tunnel today. In the future, you can use Database Connectors to achieve this natively using a standardized Socket API. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>At Cloudflare, we’re building the best compute platform in the world. We want to make it easy, seamless, and obvious to build your applications with us. But simply making the best compute platform is not enough — at the heart of your applications are the data they interact with.</p><p>Cloudflare has multiple data storage solutions available today: <a href="/introducing-workers-kv/">Workers KV</a>, <a href="/introducing-r2-object-storage/">R2</a>, and <a href="/introducing-workers-durable-objects/">Durable Objects</a>. All three follow Cloudflare’s design goals for Workers: global by default, infinitely scalable, and delightful for developers to use. We’ve partnered with third-party storage solutions like Fauna, MongoDB, and Prisma, who have built data platforms that align beautifully with our design goals, and written tutorials for databases that already support HTTP connections.</p><p>The one area that’s been sorely missed: relational databases. Cloudflare itself runs on relational databases, and we’re not alone. In April, we asked <a href="https://workers.cloudflare.com/node">which Node libraries</a> you wanted us to support, and <b>four of the top five requests</b> were related to databases. For this Full Stack Week, we asked ourselves: how could we support relational databases in a way that aligned with our design goals?</p><p>Today, we’re taking a first step towards that world by announcing support for relational databases, including Postgres and MySQL, from Workers.</p><p>Connecting to a database is no simple task — if it were as easy as passing a connection string to a database driver, we would have already done it. We’ve had to overcome several hurdles to reach this point, and have several more still to conquer.</p><p>Our goal with this announcement is to work with you, our developers, to solve the unique pain points that come from accessing databases inside Workers. 
If you’d like to work with us, fill out <a href="https://www.cloudflare.com/database-connectors-early-access">this form</a> or join us <a href="https://discord.gg/rH4SsffFcc">on Discord</a> — this is just the beginning. If you’d just like to grab the code and play around, use this <a href="https://developers.cloudflare.com/workers/tutorials/query-postgres-from-workers-using-database-connectors">example</a> to get started connecting to your own database, or check out our demo.</p>
    <div>
      <h3>Why are Database Connectors so hard to build?</h3>
      <a href="#why-are-database-connectors-so-hard-to-build">
        
      </a>
    </div>
    <p>Serverless database connections are challenging to support for several reasons.</p><p>Databases are needy — they often require TCP connections, since they assume long-lived connections between an application server and the database. The Workers runtime doesn’t currently support TCP connections, so we’ve only been able to support HTTP-based databases or proxies.</p><p>Like a relationship, establishing a connection isn’t quite enough. Developers use client libraries for databases to make submitting queries and managing the responses easy. Since the Workers runtime is not entirely Node.js compatible, we need to either roll our own database library or find one that does not use unsupported built-in libraries.</p><p>Finally, databases are sensitive. It often takes external libraries to manage shared connections between an application server and a database, since these connections tend to be expensive to establish.</p>
    <div>
      <h3>Moving past these challenges</h3>
      <a href="#moving-past-these-challenges">
        
      </a>
    </div>
    <p>Our approach today gives us the foundation to address each of these challenges in creative ways going forward.</p><p>First, we’re leveraging <a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup">cloudflared</a> to create a secure tunnel between Cloudflare and a private network within your existing infrastructure. Cloudflared already supports proxying HTTP to TCP over WebSockets; our challenge is providing interfaces that look like the socket interfaces existing libraries expect, while rewiring the implementations to redirect reads and writes to our WebSocket. This method is fast, safe, and secure, but limiting in that we lack control over where the final connections are directed. This is a problem we will solve soon, but until then our approach is essential to gathering latency and performance data to see where else we need to improve.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/vVs0KefQQxbNEt3VsNHHj/41225487a689b82c04c9ef7beb7d8ae2/unnamed-10.png" />
            
            </figure><p>Next, we’ve created a shim layer that adapts the socket API from a popular runtime to connect directly to databases using a WebSocket. This allows us to bundle code as-is, without forking or otherwise making significant changes to the database library. As part of this announcement, we’ve published a <a href="https://developers.cloudflare.com/workers/tutorials/query-postgres-from-workers-using-database-connectors">tutorial</a> on how to connect to and query a Postgres database from your Workers, using existing Cloudflare technology and a driver from the growing community at Deno. We’re excited to work with the upstream maintainers on expanding support.</p><p>Finally, we’re most excited about how this approach will let us begin to manage connection pooling and connection establishment overhead. While our current tech demo requires setting up the Cloudflare Tunnel on your own infrastructure, we’re looking for customers who’d like to pilot a model where Cloudflare hosts the tunnel for you.</p>
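<p>To make the shim idea concrete, here is a highly simplified sketch (illustrative only, not our actual implementation): wrap a message-based transport, such as a WebSocket, in the read/write interface a database driver expects from a TCP socket.</p>

```javascript
// A simplified model of the shim: the driver calls write()/read() as if it
// had a TCP socket, while bytes actually travel over a message-based
// transport. `transport` is anything with send() and an onmessage hook.
class SocketShim {
  constructor(transport) {
    this.transport = transport;
    this.buffer = [];   // bytes received but not yet read by the driver
    this.waiting = null; // resolver for a read that arrived before data did
    transport.onmessage = (bytes) => {
      this.buffer.push(...bytes);
      if (this.waiting) {
        const wake = this.waiting;
        this.waiting = null;
        wake();
      }
    };
  }

  // The driver thinks it is writing to a TCP socket; we forward the bytes
  // as a single transport message instead.
  async write(bytes) {
    this.transport.send(bytes);
    return bytes.length;
  }

  // Fill `out` with as many buffered bytes as fit, waiting for at least one.
  async read(out) {
    if (this.buffer.length === 0) {
      await new Promise((resolve) => { this.waiting = resolve; });
    }
    const n = Math.min(out.length, this.buffer.length);
    for (let i = 0; i < n; i++) out[i] = this.buffer.shift();
    return n;
  }
}
```

<p>A real shim must also handle partial frames, backpressure, and connection teardown, but the core trick is the same: the driver keeps calling the socket API it already knows.</p>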
    <div>
      <h3>Where we’re going</h3>
      <a href="#where-were-going">
        
      </a>
    </div>
    <p>We’re just getting started. Our goal with today’s announcement is to find customers who are looking to build new applications or migrate existing applications to Workers while working with data that’s stored in a relational database.</p><p>Just as Cloudflare started by providing security, performance, and reliability for customers’ websites, we’re excited about a future where Cloudflare manages database connections, handles replication of data across cloud providers and provides low-latency access to data globally.</p><p>First, we’re looking to add <a href="/introducing-socket-workers/">support for TCP into the runtime natively</a>. With native support for TCP we’ll not only have better support for databases, but expand the Workers runtime to work with data infrastructure more broadly.</p><p>Our position in the network layer of the stack makes it possible to provide performance and security benefits, as well as dramatically reduced egress costs, for global databases. To do so, we’ll repurpose the HTTP to TCP proxy service that we’ve currently built and run it for developers as a connection pooling service, managing connections to their databases on their behalf.</p><p>Finally, our network makes caching data and making it accessible globally at low latency possible. Once we have connections back to your data, making it globally accessible in Cloudflare’s network will unlock fundamentally new architectures for distributed data.</p>
    <div>
      <h3>Take our connectors for a spin</h3>
      <a href="#take-our-connectors-for-a-spin">
        
      </a>
    </div>
    <p>Want to check things out? There are three main steps to getting up-and-running:</p><ol><li><p>Deploying cloudflared within your infrastructure.</p></li><li><p>Deploying a database that connects to cloudflared.</p></li><li><p>Deploying a Worker with the database driver that submits queries.</p></li></ol><p>The Postgres tutorial is available <a href="https://developers.cloudflare.com/workers/tutorials/query-postgres-from-workers-using-database-connectors">here</a>.</p><p>When you’re all done, it’ll look a little something like this:</p>
            <pre><code>import { Client } from './driver/postgres/postgres'

export default {
  async fetch(request: Request, env, ctx: ExecutionContext) {
    try {
      const client = new Client({
        user: 'postgres',
        database: 'postgres',
        hostname: 'https://db.example.com',
        password: '',
        port: 5432,
      })
      await client.connect()
      const result = await client.queryArray('SELECT * FROM users WHERE uuid=1;')
      ctx.waitUntil(client.end())
      return new Response(JSON.stringify(result.rows[0]))
    } catch (e) {
      return new Response((e as Error).message)
    }
  },
}</code></pre>
            <p>Hit any snags? Fill out <a href="https://www.cloudflare.com/database-connectors-early-access">this form</a>, join <a href="https://discord.gg/rH4SsffFcc">our Discord</a> or shoot us an <a href="#">email</a> and let’s chat!</p> ]]></content:encoded>
            <category><![CDATA[Full Stack Week]]></category>
            <category><![CDATA[Postgres]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">CinJ7mVjQHrKXIimcCbNR</guid>
            <dc:creator>Kabir Sikand</dc:creator>
            <dc:creator>Greg McKeon</dc:creator>
            <dc:creator>Ben Yule</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we built Instant Logs]]></title>
            <link>https://blog.cloudflare.com/how-we-built-instant-logs/</link>
            <pubDate>Tue, 14 Sep 2021 12:59:03 GMT</pubDate>
            <description><![CDATA[ In this blog post, we’ll show you how we built a new system that can give you access to your Cloudflare logs in real time, with just a single click. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xxrYk5PzHSV412hdlqi9k/a303ed70dd110581d25e60a6fda01f38/image3-10.png" />
            
            </figure><p>As a developer, you may be all too familiar with the stress of responding to a major service outage, becoming aware of an ongoing security breach, or simply dealing with the frustration of setting up a new service for the first time. When confronted with these situations, you want a real-time view into the events flowing through your network, so you can receive, process, and act on information as quickly as possible.</p><p>If you have a UNIX mindset, you’ll be familiar with tailing web service logs and searching for patterns using grep. With distributed systems like Cloudflare’s edge network, this task becomes much more complex because you’ll either need to log in to thousands of servers, or ship all the logs to a single place.</p><p>This is why we built Instant Logs. Instant Logs removes all barriers to accessing your Cloudflare logs, giving you a complete platform to view your HTTP logs in real time, with just a single click, right from within Cloudflare’s dashboard. Powerful filters then let you drill into specific events or search for patterns, and act on them instantly.</p>
    <div>
      <h2>The Challenge</h2>
      <a href="#the-challenge">
        
      </a>
    </div>
    <p>Today, Cloudflare’s Logpush product already gives customers the ability to ship their logs to a third-party analytics or storage provider of their choosing. While this system is already exceptionally fast, delivering logs in about 15s on average, it is optimized for completeness and the utmost certainty that your data is reliably making it to its destination. It is the ideal solution for after things have settled down, when you want to perform a forensic deep dive or retrospective.</p><p>We originally aimed to extend this system to provide our real-time logging capabilities, but we soon realized the objectives were inherently at odds with each other. In order to get all of your data, to a single place, all the time, the laws of the universe require that latencies be introduced into the system. We needed a complementary solution, with its own unique set of objectives.</p><p>This ultimately boiled down to the following:</p><ol><li><p>It has to be extremely fast, in human terms. This means average latencies between an event occurring at the edge and being received by the client should be <i>under three seconds</i>.</p></li><li><p>We wanted the system design to be simple, and communication to be as direct to the client as possible. This meant operating the dataplane entirely at the edge, eliminating unnecessary round trips to a core data center.</p></li><li><p>The pipeline needs to provide sensible results on properties of all sizes, ranging from a few requests per day to hundreds of thousands of requests per second.</p></li><li><p>The pipeline must support a broad set of user-definable filters that are applied before any sampling occurs, such that a user can target and receive exactly what they want.</p></li></ol>
    <div>
      <h2>Workers and Durable Objects</h2>
      <a href="#workers-and-durable-objects">
        
      </a>
    </div>
    <p>Our existing Logpush pipeline relies heavily on Kafka to provide sharding, buffering, and aggregation at a single, central location. While we’ve had excellent results using Kafka for these pipelines, the clusters are optimized to run only within our core data centers. Using Kafka would require extra hops to far away data centers, adding a latency penalty we were not willing to incur.</p><p>In order to keep the data plane running on the edge, we needed primitives that would allow us to perform some of the same key functions we needed out of <a href="/squeezing-the-firehose/">Kafka</a>. This is where Workers and the recently released Durable Objects come in. Workers provide an incredibly simple-to-use, highly elastic, edge-native compute platform we can use to receive events, and perform transformations. Durable Objects, through their global uniqueness, allow us to coordinate messages streaming from thousands of servers and route them to a singular object. This is where aggregation and buffering are performed, before finally pushing to a client over a thin WebSocket. We get all of this, without ever having to leave the edge!</p><p>Let’s walk through what this looks like in practice.</p>
    <div>
      <h3>A Simple Start</h3>
      <a href="#a-simple-start">
        
      </a>
    </div>
    <p>Imagine a simple scenario in which we have a single web server which produces log messages, and a single client which wants to consume them. This can be implemented by creating a Durable Object, which we will refer to as a Durable Session, that serves as the point of coordination between the server and client. In this case, the client initiates a WebSocket connection with the Durable Object, and the server sends messages to the Durable Object over HTTP, which are then forwarded directly to the client.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5PuK0M59DUIxyde5IrIU78/ac2650b42242101e40872e0b503e7e85/image1-9.png" />
            
            </figure><p>This model is quite quick and introduces very little additional latency other than what would be required to send a payload directly from the web server to the client. This is thanks to the fact that Durable Objects are generally located at or near the data center where they are first requested. At least in human terms, it’s instant. Adding more servers to our model is also trivial. As the additional servers produce events, they will all be routed to the same Durable Object, which merges them into a single stream, and sends them to the client over the same WebSocket.</p>
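<p>As a sketch of this coordination model (an in-memory stand-in, not the actual Durable Object code, which would use the Workers WebSocket API), the Durable Session boils down to a set of connected clients and a fan-out loop:</p>

```javascript
// An in-memory model of the "Durable Session" coordination point.
// Servers push log lines in via ingest(); every connected client
// receives the merged stream.
class DurableSession {
  constructor() {
    this.clients = new Set();
  }

  // A client "opens a WebSocket": register a callback for incoming lines.
  // The returned function disconnects the client.
  connect(onLine) {
    this.clients.add(onLine);
    return () => this.clients.delete(onLine);
  }

  // A server delivers a log line over HTTP; fan it out to every client.
  ingest(line) {
    for (const deliver of this.clients) deliver(line);
  }
}
```

<p>Because a Durable Object is globally unique, every server that addresses the same session name reaches this same instance, which is what merges thousands of per-server streams into one.</p>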
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4TEHgyNF7oP8le25gvViNY/fce6b70f34f617de00673332929d6dd8/image5-7.png" />
            
            </figure><p>Durable Objects are inherently single-threaded. As the number of servers in our simple example increases, the Durable Object will eventually saturate its CPU time and start rejecting incoming requests. And even if it didn’t, as data volumes increase, we risk overwhelming a client’s ability to download and render log lines. We’ll handle this in a few different ways.</p>
    <div>
      <h3>Honing in on specific events</h3>
      <a href="#honing-in-on-specific-events">
        
      </a>
    </div>
    <p>Filtering is the simplest and most obvious way to reduce data volume before it reaches the client. If we can filter out the noise, and stream only the events of interest, we can substantially reduce volume. Performing this transformation in the Durable Object itself will provide no relief from CPU saturation concerns. Instead, we can push this filtering out to an invoking Worker, which will run many filter operations in parallel, as it elastically scales to process all the incoming requests to the Durable Object. At this point, our architecture starts to look a lot like the MapReduce pattern!</p>
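<p>Conceptually, the filtering step looks something like the following sketch (the field names and operators here are illustrative, not our actual filter grammar). Because it runs in the stateless Worker, it scales horizontally before the single-threaded Durable Object is ever involved:</p>

```javascript
// Apply a list of user-defined filters to one event. All filters must
// match (logical AND) for the event to pass through to the Durable Object.
function matches(event, filters) {
  return filters.every(({ field, op, value }) => {
    const actual = event[field];
    switch (op) {
      case 'eq':       return actual === value;
      case 'neq':      return actual !== value;
      case 'contains': return String(actual).includes(value);
      default:         throw new Error(`unknown operator: ${op}`);
    }
  });
}

// Drop non-matching events in the Worker, before any sampling occurs.
function applyFilters(events, filters) {
  return events.filter((e) => matches(e, filters));
}
```
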
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7k0GQ5U5eqoppXsxYR15km/489e51991c8867fd0cc58236d7dc25e9/image6-2.png" />
            
            </figure>
    <div>
      <h3>Scaling up with shards</h3>
      <a href="#scaling-up-with-shards">
        
      </a>
    </div>
    <p>Ok, so filtering may be great in some situations, but it’s not going to save us in every scenario. We still need a solution to help us coordinate between potentially thousands of servers that are sending events every single second. Durable Objects will come to the rescue yet again. We can implement a sharding layer consisting of Durable Objects, which we will call Durable Shards, that effectively allows us to reduce the number of requests being sent to our primary object.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1pKCBGb5Wrs9tMLxhzbcKo/479394e5ea9256996cdf462d1c9d00e6/image4-10.png" />
            
            </figure><p>But how do we implement this layer if Durable Objects are globally unique? We first need to decide on a shard key, which is used to determine which Durable Object a given message should first be routed to. When the Worker processes a message, the key will be added to the name of the downstream Durable Object. Assuming our keys are well-balanced, this should effectively reduce the load on the primary Durable Object to approximately 1/N of what it would otherwise be, where N is the number of shards.</p>
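<p>A sketch of that routing step (the hash function and naming scheme here are illustrative, not our production code): derive a stable shard number from the key, and fold it into the downstream Durable Object’s name so each key always lands on the same Durable Shard.</p>

```javascript
// Route a shard key to one of `shardCount` Durable Shards by deriving a
// deterministic shard number and appending it to the object name.
// Uses a tiny FNV-1a hash; any well-distributed hash works here.
function shardName(baseName, shardKey, shardCount) {
  let h = 0x811c9dc5;
  for (const ch of shardKey) {
    h ^= ch.charCodeAt(0);
    h = Math.imul(h, 0x01000193) >>> 0; // keep it a 32-bit unsigned value
  }
  return `${baseName}-shard-${h % shardCount}`;
}
```

<p>Because the name is derived purely from the key, every Worker invocation that sees the same key addresses the same globally unique shard, with no coordination required.</p>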
    <div>
      <h3>Reaching the moon by sampling</h3>
      <a href="#reaching-the-moon-by-sampling">
        
      </a>
    </div>
    <p>But wait, there’s more to do. Going back to our original product requirements, “<i>The pipeline needs to provide sensible results on properties of all sizes, ranging from a few requests per day to hundreds of thousands of requests per second</i>.” With the system as designed so far, we have the technical headroom to process an almost arbitrary number of logs. However, we’ve done nothing to reduce the absolute volume of messages that need to be processed and sent to the client, and at high log volumes, clients would quickly be overwhelmed. To deliver the interactive, instant user experience customers expect, we need to roll up our sleeves one more time.</p><p>This is where our final trick, sampling, comes into play.</p><p>Up to this point, when our pipeline saturates, it still makes forward progress by dropping excess data as the Durable Object starts to refuse connections. However, this form of ‘uncontrolled shedding’ is dangerous because it causes us to lose information. When we drop data in this way, we can’t keep a record of the data we dropped, and we cannot infer things about the original shape of the traffic from the messages that we do receive. Instead, we implement a form of ‘controlled’ sampling, which preserves statistics and information about the original traffic.</p><p>For Instant Logs, we implement a sampling technique called <a href="https://en.wikipedia.org/wiki/Reservoir_sampling">Reservoir Sampling</a>. Reservoir sampling is a form of dynamic sampling that has this amazing property of letting us pick a fixed number <i>k</i> of items from a stream of unknown length <i>n</i>, with a single pass through the data. By buffering data in the reservoir, and flushing it on a short (sub-second) time interval, we can output random samples to the client at the maximum data rate of our choosing. Sampling is implemented in both layers of Durable Objects.</p>
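<p>The core of the technique fits in a few lines. This sketch shows the classic single-pass reservoir algorithm (“Algorithm R”); the buffering and sub-second flush interval wrapped around it in the actual pipeline are omitted:</p>

```javascript
// Keep a uniform random sample of k items from a stream of unknown
// length, in a single pass.
function reservoirSample(stream, k) {
  const reservoir = [];
  let seen = 0;
  for (const item of stream) {
    seen++;
    if (reservoir.length < k) {
      // Fill the reservoir with the first k items.
      reservoir.push(item);
    } else {
      // Replace a random slot with probability k / seen, which keeps every
      // item seen so far equally likely to be in the reservoir.
      const j = Math.floor(Math.random() * seen);
      if (j < k) reservoir[j] = item;
    }
  }
  return reservoir;
}
```

<p>If the stream holds fewer than <i>k</i> items, every item is kept, which is exactly the behavior a small property needs.</p>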
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4HCrDDvKaIjouZLIBfsqAA/cc071a45d07036cb64bb9e8470fb7083/image2-15.png" />
            
            </figure><p>Information about the original traffic shape is preserved by assigning a sample interval to each line: the number of original events that line stands in for, equal to 1/probability. The actual number of requests can then be calculated by taking the sum of all sample intervals within a time window. This technique adds a slight amount of latency to the pipeline to account for buffering, but enables us to point an event source of nearly any size at the pipeline, and we can expect it will be handled in a sensible, controlled way.</p>
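<p>The bookkeeping on the client side is then just a sum. Assuming each emitted line carries its sample interval (the field name here is hypothetical), the estimated true request count over a window is:</p>

```javascript
// Estimate the true number of requests in a window from sampled lines,
// where each line's sampleInterval is the number of original events it
// represents (1/probability).
function estimateRequestCount(sampledLines) {
  return sampledLines.reduce((total, line) => total + line.sampleInterval, 0);
}
```
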
    <div>
      <h3>Putting it all together</h3>
      <a href="#putting-it-all-together">
        
      </a>
    </div>
    <div></div><p>What we are left with is a pipeline that sensibly handles wildly different volumes of traffic, from single digits to hundreds of thousands of requests a second. It allows the user to pinpoint an exact event in a sea of millions, or calculate summaries over every single one. It delivers insight within seconds, all without ever having to do more than click a button.</p><p>Best of all? Workers and Durable Objects handle this workload with aplomb and no tuning, and the available developer tooling allowed me to be productive from my first day writing code targeting the Workers ecosystem.</p>
    <div>
      <h3>How to get involved?</h3>
      <a href="#how-to-get-involved">
        
      </a>
    </div>
    <p>We’ll be starting our Beta for Instant Logs in a couple of weeks. <a href="https://www.cloudflare.com/instant-logs-beta/">Join the waitlist</a> to get notified about when you can get access!</p><p>If you want to be part of building the future of data at Cloudflare, we’re <a href="https://boards.greenhouse.io/cloudflare/jobs/2103918?gh_jid=2103918">hiring engineers</a> for our data team in Lisbon, London, Austin, and San Francisco!</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    <div></div> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <guid isPermaLink="false">1u51DqLkLcMw96TGieNGp0</guid>
            <dc:creator>Ben Yule</dc:creator>
        </item>
    </channel>
</rss>