
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Mon, 13 Apr 2026 18:48:10 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Sequential consistency without borders: how D1 implements global read replication]]></title>
            <link>https://blog.cloudflare.com/d1-read-replication-beta/</link>
            <pubDate>Thu, 10 Apr 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ D1, Cloudflare’s managed SQL database, announces read replication beta. Here's a deep dive of the read replication implementation and how your queries can remain consistent across all regions. ]]></description>
            <content:encoded><![CDATA[ <p>Read replication of <a href="https://www.cloudflare.com/developer-platform/products/d1/">D1 databases</a> is in public beta!</p><p>D1 read replication makes read-only copies of your database available in multiple regions across Cloudflare’s network.  For busy, read-heavy applications like e-commerce websites, content management tools, and mobile apps:</p><ul><li><p>D1 read replication lowers average latency by routing user requests to read replicas in nearby regions.</p></li><li><p>D1 read replication increases overall throughput by offloading read queries to read replicas, allowing the primary database to handle more write queries.</p></li></ul><p>The main copy of your database is called the primary database and the read-only copies are called read replicas.  When you enable replication for a D1 database, the D1 service automatically creates and maintains read replicas of your primary database.  As your users make requests, D1 routes those requests to an appropriate copy of the database (either the primary or a replica) based on performance heuristics, the type of queries made in those requests, and the query consistency needs as expressed by your application.</p><p>All of this global replica creation and request routing is handled by Cloudflare at no additional cost.</p><p>To take advantage of read replication, your Worker needs to use the new D1 <a href="https://developers.cloudflare.com/d1/best-practices/read-replication/"><u>Sessions API</u></a>. Click the button below to run a Worker using D1 read replication with this <a href="https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template"><u>code example</u></a> to see for yourself!</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p>
    <div>
      <h2>D1 Sessions API</h2>
      <a href="#d1-sessions-api">
        
      </a>
    </div>
    <p>D1’s read replication feature is built around the concept of database <i>sessions</i>.  A session encapsulates all the queries representing one logical session for your application. For example, a session might represent all requests coming from a particular web browser or all requests coming from a mobile app used by one of your users. If you use sessions, your queries will use the appropriate copy of the D1 database that makes the most sense for your request, be that the primary database or a nearby replica.</p><p>The sessions implementation ensures <a href="https://jepsen.io/consistency/models/sequential"><u>sequential consistency</u></a> for all queries in the session, no matter what copy of the database each query is routed to.  The sequential consistency model has important properties like "<a href="https://jepsen.io/consistency/models/read-your-writes"><u>read my own writes</u></a>" and "<a href="https://jepsen.io/consistency/models/writes-follow-reads"><u>writes follow reads</u></a>," as well as a total ordering of writes. The total ordering of writes means that every replica will see transactions committed in the same order, which is exactly the behavior we want in a transactional system.  Said another way, sequential consistency guarantees that the reads and writes are executed in the order in which you write them in your code.</p><p>Some examples of consistency implications in real-world applications:</p><ul><li><p>You are using an online store and just placed an order (write query), followed by a visit to the account page to list all your orders (read query handled by a replica). 
You want the newly placed order to be listed there as well.</p></li><li><p>You are using your bank’s web application and make a transfer to your electricity provider (write query), and then immediately navigate to the account balance page (read query handled by a replica) to check the latest balance of your account, including that last payment.</p></li></ul><p>Why do we need the Sessions API? Why can we not just query replicas directly?</p><p>Applications using D1 read replication need the Sessions API because D1 runs on Cloudflare’s global network and there’s no way to ensure that requests from the same client get routed to the same replica for every request. For example, the client may switch from WiFi to a mobile network in a way that changes how their requests are routed to Cloudflare. Or the data center that handled previous requests could be down because of an outage or maintenance.</p><p>D1’s read replication is asynchronous, so it’s possible that when you switch between replicas, the replica you switch to lags behind the replica you were using. This could mean that, for example, the new replica hasn’t learned of the writes you just completed.  We could no longer guarantee useful properties like “read your own writes”.  In fact, in the presence of shifty routing, the only consistency property we could guarantee is that what you read had been committed at some point in the past (<a href="https://jepsen.io/consistency/models/read-committed"><u>read committed</u></a> consistency), which isn’t very useful at all!</p><p>Since we can’t guarantee routing to the same replica, we flip the script and use the information we get from the Sessions API to make sure whatever replica we land on can handle the request in a sequentially-consistent manner.</p><p>Here’s what the Sessions API looks like in a Worker:</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env) {
    // A. Create the session.
    // When we create a D1 session, we can continue where we left off from a previous    
    // session if we have that session's last bookmark or use a constraint.
    const bookmark = request.headers.get('x-d1-bookmark') ?? 'first-unconstrained'
    const session = env.DB.withSession(bookmark)

    // Use this session for all our Workers' routes.
    const response = await handleRequest(request, session)

    // B. Return the bookmark so we can continue the session in another request.
    response.headers.set('x-d1-bookmark', session.getBookmark())

    return response
  }
}

async function handleRequest(request: Request, session: D1DatabaseSession) {
  const { pathname } = new URL(request.url)

  if (request.method === "GET" &amp;&amp; pathname === '/api/orders') {
    // C. Session read query.
    const { results } = await session.prepare('SELECT * FROM Orders').all()
    return Response.json(results)

  } else if (request.method === "POST" &amp;&amp; pathname === '/api/orders') {
    const order = await request.json&lt;Order&gt;()

    // D. Session write query.
    // Since this is a write query, D1 will transparently forward it to the primary.
    await session
      .prepare('INSERT INTO Orders VALUES (?, ?, ?)')
      .bind(order.orderId, order.customerId, order.quantity)
      .run()

    // E. Session read-after-write query.
    // In order for the application to be correct, this SELECT statement must see
    // the results of the INSERT statement above.
    const { results } = await session
      .prepare('SELECT * FROM Orders')
      .all()

    return Response.json(results)
  }

  return new Response('Not found', { status: 404 })
}</code></pre>
            <p>To use the Sessions API, you first need to create a session using the <code>withSession</code> method (<b><i>step A</i></b>).  The <code>withSession</code> method takes either a bookmark or a constraint as a parameter.  The provided constraint instructs D1 where to forward the first query of the session. Using <code>first-unconstrained</code> allows the first query to be processed by any replica without any restriction on how up-to-date it is. Using <code>first-primary</code> ensures that the first query of the session will be forwarded to the primary.</p>
            <pre><code>// A. Create the session.
const bookmark = request.headers.get('x-d1-bookmark') ?? 'first-unconstrained'
const session = env.DB.withSession(bookmark)</code></pre>
            <p>Providing an explicit bookmark instructs D1 that whichever database instance processes the query has to be at least as up-to-date as the provided bookmark (in case of a replica; the primary database is always up-to-date by definition).  Explicit bookmarks are how we can continue from previously-created sessions and maintain sequential consistency across user requests.</p><p>Once you’ve created the session, make queries like you normally would with D1.  The session object ensures that the queries you make are sequentially consistent with regards to each other.</p>
            <pre><code>// C. Session read query.
const { results } = await session.prepare('SELECT * FROM Orders').all()</code></pre>
            <p>For example, in the code example above, the session read query for listing the orders (<b><i>step C</i></b>) will return results that are at least as up-to-date as the bookmark used to create the session (<b><i>step A</i></b>).</p><p>More interesting is the write query to add a new order (<b><i>step D</i></b>) followed by the read query to list all orders (<b><i>step E</i></b>). Because both queries are executed on the same session, it is guaranteed that the read query will observe a database copy that includes the write query, thus maintaining sequential consistency.</p>
            <pre><code>// D. Session write query.
await session
  .prepare('INSERT INTO Orders VALUES (?, ?, ?)')
  .bind(order.orderId, order.customerId, order.quantity)
  .run()

// E. Session read-after-write query.
const { results } = await session
  .prepare('SELECT * FROM Orders')
  .all()</code></pre>
            <p>Note that we could make a single batch query to the primary including both the write and the list, but the benefit of using the new Sessions API is that you can use the extra read replica databases for your read queries and allow the primary database to handle more write queries.</p><p>The session object does the necessary bookkeeping to maintain the latest bookmark observed across all queries executed using that specific session, and always includes that latest bookmark in requests to D1. Note that any query executed without using the session object is not guaranteed to be sequentially consistent with the queries executed in the session.</p><p>When possible, we suggest continuing sessions across requests by including bookmarks in your responses to clients (<b><i>step B</i></b>), and having clients pass previously received bookmarks in their future requests.</p>
            <pre><code>// B. Return the bookmark so we can continue the session in another request.
response.headers.set('x-d1-bookmark', session.getBookmark())</code></pre>
            <p>This allows <i>all</i> of a client’s requests to be in the same session. You can do this by grabbing the session’s current bookmark at the end of the request (<code>session.getBookmark()</code>) and sending the bookmark in the response back to the client in HTTP headers, in HTTP cookies, or in the response body itself.</p>
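<p>As a sketch of the client side of this round-trip (the <code>BookmarkCarrier</code> class and its method names are illustrative choices of ours, not part of any D1 API), a client could hold the last bookmark it received and replay it on its next request:</p>

```typescript
// Illustrative client-side helper: remember the bookmark returned by the
// Worker and attach it to the next request, so all of a client's requests
// stay in the same D1 session.
class BookmarkCarrier {
  private bookmark: string | null = null;

  // Headers to send with the next request; includes the stored bookmark, if any.
  headersForNextRequest(): Record<string, string> {
    return this.bookmark ? { "x-d1-bookmark": this.bookmark } : {};
  }

  // Record the bookmark from a response, e.g.
  //   carrier.record(response.headers.get("x-d1-bookmark"))
  record(bookmark: string | null): void {
    if (bookmark !== null) this.bookmark = bookmark;
  }
}
```

On the first request the carrier sends no bookmark (so the Worker falls back to a constraint like <code>first-unconstrained</code>); every later request carries the newest bookmark the client has seen.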
    <div>
      <h3>Consistency with and without Sessions API</h3>
      <a href="#consistency-with-and-without-sessions-api">
        
      </a>
    </div>
    <p>In this section, we will explore the classic scenario of a read-after-write query to showcase how using the new D1 Sessions API ensures that we get sequential consistency and avoid any issues with inconsistent results in our application.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zIBf3V1YIogYJKeWm1kDn/f484faf38cc0f8d7227f9db1fa386354/1.png" />
          </figure><p>The client, a user Worker, sends a D1 write query that is processed by the database primary and receives the results back. However, the subsequent read query ends up being processed by a database replica. If the database replica is lagging far enough behind the database primary that it does not yet include the first write query, then the returned results will be inconsistent, and probably incorrect for your application's business logic.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1w81ec5tNGWJ7sQyFBZQ6l/d487ccf225a097a0e48054d88df0ba1f/2.png" />
          </figure><p>Using the Sessions API fixes the inconsistency issue. The first write query is again processed by the database primary, and this time the response includes “<b>Bookmark 100</b>”. The session object will store this bookmark for you transparently.</p><p>The subsequent read query is processed by a database replica as before, but now, since the query includes the previously received “<b>Bookmark 100</b>”, the database replica will wait until its database copy is at least as up-to-date as “<b>Bookmark 100</b>”. Only once it’s up-to-date will the read query be processed and the results returned, including the replica’s latest bookmark, “<b>Bookmark 104</b>”.</p><p>Notice that the returned bookmark for the read query is “<b>Bookmark 104</b>”, which is different from the one passed in the query request. This can happen if there were other writes from other client requests that also got replicated to the database replica in between the two queries our own client executed.</p>
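<p>The replica-side “wait until up-to-date” behavior described above can be sketched with a toy model (numeric bookmarks and names of our own choosing, not D1’s actual implementation):</p>

```typescript
// Toy model of a replica gating reads on a bookmark: a read waits until the
// replica has applied at least the requested bookmark from the primary's log.
class ReplicaState {
  private applied = 0;
  private waiters: Array<{ target: number; resolve: () => void }> = [];

  // Called as replicated log entries are confirmed; never moves backwards.
  advanceTo(bookmark: number): void {
    this.applied = Math.max(this.applied, bookmark);
    this.waiters = this.waiters.filter((w) => {
      if (w.target <= this.applied) {
        w.resolve();
        return false; // drop satisfied waiters
      }
      return true;
    });
  }

  appliedBookmark(): number {
    return this.applied;
  }

  // Resolves once the replica is at least as up-to-date as `bookmark`.
  waitForBookmark(bookmark: number): Promise<void> {
    if (bookmark <= this.applied) return Promise.resolve();
    return new Promise((resolve) => this.waiters.push({ target: bookmark, resolve }));
  }
}
```

A read carrying “Bookmark 100” would call <code>waitForBookmark(100)</code> and proceed only after the replication stream has advanced the replica that far.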
    <div>
      <h2>Enabling read replication</h2>
      <a href="#enabling-read-replication">
        
      </a>
    </div>
    <p>To start using D1 read replication:</p><ol><li><p>Update your Worker to use the D1 Sessions API to tell D1 what queries are part of the same database session. The Sessions API works with databases that do not have read replication enabled as well, so it’s safe to ship this code even before you enable replicas. Here’s <a href="http://developers.cloudflare.com/d1/best-practices/read-replication/"><u>an example</u></a>.</p></li><li><p><a href="https://developers.cloudflare.com/d1/best-practices/read-replication/#enable-read-replication"><u>Enable replicas</u></a> for your database via <a href="https://dash.cloudflare.com/?to=/:account/workers/d1"><u>Cloudflare dashboard</u></a> &gt; Select D1 database &gt; Settings.</p></li></ol><p>D1 read replication is built into D1, and you don’t pay extra storage or compute costs for replicas. You incur the exact same D1 usage with or without replicas, based on <code>rows_read</code> and <code>rows_written</code> by your queries. Unlike traditional database systems with replication, you don’t have to manually create replicas, decide where they run, or decide how to route requests between the primary database and read replicas. Cloudflare handles this for you, and the Sessions API ensures sequential consistency.</p><p>Since D1 read replication is in beta, we recommend trying D1 read replication on a non-production database first, and migrating your production workloads after validating that read replication works for your use case.</p><p>If you don’t have a D1 database and want to try out D1 read replication, <a href="https://dash.cloudflare.com/?to=/:account/workers/d1/create"><u>create a test database</u></a> in the Cloudflare dashboard.</p>
    <div>
      <h3>Observing your replicas</h3>
      <a href="#observing-your-replicas">
        
      </a>
    </div>
    <p>Once you’ve enabled D1 read replication, read queries will start to be processed by replica database instances. The response of each query includes information in the nested <code>meta</code> object relevant to read replication, like <code>served_by_region</code> and <code>served_by_primary</code>. The former denotes the region of the database instance that processed the query, and the latter is <code>true</code> if and only if your query was processed by the primary database instance.</p><p>In addition, the <a href="https://dash.cloudflare.com/?to=/:account/workers/d1/"><u>D1 dashboard overview</u></a> for a database now includes information about the database instances handling your queries. You can see how many queries are handled by the primary instance or by a replica, and a breakdown of the queries processed by region. The example screenshots below show graphs displaying the number of queries executed and number of rows read by each region.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ChIlqQ5xgJfiftOHw9Egg/b583d00d22dcea60e7439dfbfa1761df/image10.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Zze5y22759fOIYPOqrK1Y/6cd3c684006ca8234db20924cae8b960/image1.png" />
          </figure>
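<p>Returning to the per-query metadata described above, a Worker could log where each query was served from using those <code>meta</code> fields. The <code>describeRouting</code> helper below is our own illustrative wrapper; only the field names come from the D1 documentation:</p>

```typescript
// Summarize the replication-related metadata D1 returns with each query result.
interface D1ReplicationMeta {
  served_by_region?: string;
  served_by_primary?: boolean;
}

function describeRouting(meta: D1ReplicationMeta): string {
  const region = meta.served_by_region ?? "unknown";
  return meta.served_by_primary ? `primary (${region})` : `replica (${region})`;
}

// In a Worker, after a query:
//   const { results, meta } = await session.prepare("SELECT * FROM Orders").all();
//   console.log(describeRouting(meta));
```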
    <div>
      <h2>Under the hood: how D1 read replication is implemented</h2>
      <a href="#under-the-hood-how-d1-read-replication-is-implemented">
        
      </a>
    </div>
    <p>D1 is implemented on top of SQLite-backed Durable Objects running on top of Cloudflare’s <a href="https://blog.cloudflare.com/sqlite-in-durable-objects/#under-the-hood-storage-relay-service"><u>Storage Relay Service</u></a>.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3GWWL8goIzrGTmkH54O416/aabd47fcd94bfc73492556b19ac6069f/5.png" />
          </figure><p>D1 is structured with a 3-layer architecture.  First is the binding API layer that runs in the customer’s Worker.  Next is a stateless Worker layer that routes requests based on database ID to a layer of Durable Objects that handle the actual SQL operations behind D1.  This is similar to how <a href="https://developers.cloudflare.com/durable-objects/what-are-durable-objects/#durable-objects-in-cloudflare"><u>most applications using Cloudflare Workers and Durable Objects are structured</u></a>.</p><p>For a non-replicated database, there is exactly one Durable Object per database.  When a user’s Worker makes a request with the D1 binding for the database, that request is first routed to a D1 Worker running in the same location as the user’s Worker.  The D1 Worker figures out which D1 Durable Object backs the user’s D1 database and fetches an RPC stub to that Durable Object.  The Durable Objects routing layer figures out where the Durable Object is located, and opens an RPC connection to it.  Finally, the D1 Durable Object then handles the query on behalf of the user’s Worker using the Durable Objects SQL API.</p><p>In the Durable Objects SQL API, all queries go to a SQLite database on the local disk of the server where the Durable Object is running.  Durable Objects run <a href="https://www.sqlite.org/wal.html"><u>SQLite in WAL mode</u></a>.  In WAL mode, every write query appends to a write-ahead log (the WAL).  As SQLite appends entries to the end of the WAL file, a database-specific component called the Storage Relay Service <i>leader</i> synchronously replicates the entries to 5 <i>durability followers</i> on servers in different datacenters.  
When a quorum (at least 3 out of 5) of the durability followers acknowledge that they have safely stored the data, the leader allows SQLite’s write queries to commit and opens the Durable Object’s output gate, so that the Durable Object can respond to requests.</p><p>Our implementation of WAL mode allows us to have a complete log of all of the committed changes to the database. This enables a couple of important features in SQLite-backed Durable Objects and D1:</p><ul><li><p>We identify each write with a <a href="https://en.wikipedia.org/wiki/Lamport_timestamp"><u>Lamport timestamp</u></a> we call a <a href="https://developers.cloudflare.com/d1/reference/time-travel/#bookmarks"><u>bookmark</u></a>.</p></li><li><p>We construct databases anywhere in the world by downloading all of the WAL entries from cold storage and replaying each WAL entry in order.</p></li><li><p>We implement <a href="https://developers.cloudflare.com/d1/reference/time-travel/"><u>Point-in-time recovery (PITR)</u></a> by replaying WAL entries up to a specific bookmark rather than to the end of the log.</p></li></ul><p>Unfortunately, having the main data structure of the database be a log is not ideal.  WAL entries are in write order, which is often neither convenient nor fast.  In order to cut down on the overheads of the log, SQLite <i>checkpoints</i> the log by copying the WAL entries back into the main database file.  Read queries are serviced directly by SQLite using files on disk — either the main database file for checkpointed queries, or the WAL file for writes more recent than the last checkpoint.  
Similarly, the Storage Relay Service snapshots the database to cold storage so that we can replay a database by downloading the most recent snapshot and replaying the WAL from there, rather than having to download an enormous number of individual WAL entries.</p><p>WAL mode is the foundation for implementing read replication, since we can stream writes to locations other than cold storage in real time.</p>
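<p>The snapshot-plus-log reconstruction described above can be sketched with a toy model (our own simplified types; the real Storage Relay Service operates on SQLite WAL frames, not key-value maps):</p>

```typescript
// Toy model: rebuild database state by applying WAL entries, in bookmark order,
// on top of the latest snapshot. Stopping at a target bookmark instead of the
// end of the log is exactly point-in-time recovery (PITR).
interface WalEntry {
  bookmark: number;
  apply(state: Map<string, string>): void;
}

function rebuild(
  snapshot: Map<string, string>,
  wal: WalEntry[],
  upToBookmark: number = Infinity
): Map<string, string> {
  const state = new Map(snapshot); // never mutate the snapshot itself
  for (const entry of wal) {
    if (entry.bookmark > upToBookmark) break; // PITR: stop at the target bookmark
    entry.apply(state);
  }
  return state;
}
```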
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ezp8gcf3gXqkvumzufGfP/1a54fc6f434290968c7e695c2e5bb0c9/6.png" />
          </figure><p>We implemented read replication in 5 major steps.</p><p>First, we made it possible to create replica Durable Objects with a read-only copy of the database.  These replica objects boot by fetching the latest snapshot and replaying the log from cold storage up to whatever bookmark the primary database’s leader had last committed. This gave us point-in-time replicas: without continuous updates, a replica stayed frozen until its Durable Object restarted.</p><p>Second, we registered the replica leader with the primary’s leader so that the primary leader sends the replicas every entry written to the WAL at the same time that it sends the WAL entries to the durability followers.  Each of the WAL entries is marked with a bookmark that uniquely identifies the WAL entry in the sequence of WAL entries.  We’ll use the bookmark later.</p><p>Note that since these writes are sent to the replicas <i>before</i> a quorum of durability followers have confirmed them, the writes are actually unconfirmed writes, and the replica leader must be careful to keep the writes hidden from the replica Durable Object until they are confirmed.  The replica leader in the Storage Relay Service does this by implementing enough of SQLite’s <a href="https://www.sqlite.org/walformat.html#the_wal_index_file_format"><u>WAL-index protocol</u></a> that the unconfirmed writes coming from the primary leader look to SQLite as though they come from just another SQLite client doing unconfirmed writes.  SQLite knows to ignore the writes until they are confirmed in the log.  The upshot of this is that the replica leader can write WAL entries to the SQLite WAL <i>immediately</i>, and then “commit” them when the primary leader tells the replica that the entries have been confirmed by durability followers.</p><p>One neat thing about this approach is that writes are sent from the primary to the replica as quickly as they are generated by the primary, helping to minimize lag between replicas.  
In theory, if the write query was proxied through a replica to the primary, the response back to the replica will arrive at almost the same time as the message that updates the replica.  In such a case, it looks like there’s no replica lag at all!</p><p>In practice, we find that replication is really fast.  Internally, we measure <i>confirm lag</i>, defined as the time from when a primary confirms a change to when the replica confirms a change.  The table below shows the confirm lag for two D1 databases whose primaries are in different regions.</p><div>
    <figure>
        <table>
            <thead>
                <tr>
                    <th>Replica Region</th>
                    <th>Database A<br />(Primary region: ENAM)</th>
                    <th>Database B<br />(Primary region: WNAM)</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>ENAM</td>
                    <td>N/A</td>
                    <td>30 ms</td>
                </tr>
                <tr>
                    <td>WNAM</td>
                    <td>45 ms</td>
                    <td>N/A</td>
                </tr>
                <tr>
                    <td>WEUR</td>
                    <td>55 ms</td>
                    <td>75 ms</td>
                </tr>
                <tr>
                    <td>EEUR</td>
                    <td>67 ms</td>
                    <td>75 ms</td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><p><sup><i>Confirm lag for 2 replicated databases.  N/A means that we have no data for this combination.  The region abbreviations are the same ones used for </i></sup><a href="https://developers.cloudflare.com/durable-objects/reference/data-location/#supported-locations-1"><sup><i><u>Durable Object location hints</u></i></sup></a><sup><i>.</i></sup></p><p>The table shows that confirm lag is correlated with the network round-trip time between the data centers hosting the primary databases and their replicas.  This is clearly visible in the difference between the confirm lag for the European replicas of the two databases.  As airline route planners know, EEUR is <a href="http://www.gcmap.com/mapui?P=ewr-lhr,+ewr-waw"><u>appreciably further away</u></a> from ENAM than WEUR is, but from WNAM, both European regions (WEUR and EEUR) are <a href="http://www.gcmap.com/mapui?P=sjc-lhr,+sjc-waw"><u>about equally as far away</u></a>.  We see that in our replication numbers.</p><p>The exact placement of the D1 database in the region matters too.  Regions like ENAM and WNAM are quite large in themselves.  Database A’s placement in ENAM happens to be further away from most data centers in WNAM compared to database B’s placement in WNAM relative to the ENAM data centers.  As such, database B sees slightly lower confirm lag.</p><p>Try as we might, we can’t beat the speed of light!</p><p>Third, we updated the Durable Object routing system to be aware of Durable Object replicas.  When read replication is enabled on a Durable Object, two things happen.  First, we create a set of replicas according to a replication policy.  The current replication policy that D1 uses is simple: a static set of replicas in <a href="https://developers.cloudflare.com/d1/configuration/data-location/#available-location-hints"><u>every region that D1 supports</u></a>.  Second, we turn on a routing policy for the Durable Object.  
The current policy that D1 uses is also simple: route to the Durable Object replica in the region close to where the user request is.  With this step, we have updateable read-only replicas, and can route requests to them!</p><p>Fourth, we updated D1’s Durable Object code to handle write queries on replicas. D1 uses SQLite to figure out whether a request is a write query or a read query.  This means that the determination of whether something is a read or write query happens <i>after</i> the request is routed.  Read replicas will have to handle write requests!  We solve this by instantiating each replica D1 Durable Object with a reference to its primary.  If the D1 Durable Object determines that the query is a write query, it forwards the request to the primary for the primary to handle. This happens transparently, keeping the user code simple.</p><p>As of this fourth step, we can handle read and write queries at every copy of the D1 Durable Object, whether it's a primary or not.  Unfortunately, as outlined above, if a user's requests get routed to different read replicas, they may see different views of the database, leading to a very weak consistency model.  So the last step is to implement the Sessions API across the D1 Worker and D1 Durable Object.  Recall that every WAL entry is marked with a bookmark.  These bookmarks uniquely identify a point in (logical) time in the database.  Our bookmarks are strictly monotonically increasing; every write to a database makes a new bookmark with a value greater than any other bookmark for that database.</p><p>Using bookmarks, we implement the Sessions API with the following algorithm split across the D1 binding implementation, the D1 Worker, and D1 Durable Object.</p><p>First up in the D1 binding, we have code that creates the <code>D1DatabaseSession</code> object and code within the <code>D1DatabaseSession</code> object to keep track of the latest bookmark.</p>
            <pre><code>// D1Binding is the binding code running within the user's Worker
// that provides the existing D1 Workers API and the new withSession method.
class D1Binding {
  // Injected by the runtime into the D1 binding.
  d1Service: D1ServiceBinding

  withSession(initialBookmark) {
    return new D1DatabaseSession(this.d1Service, this.databaseId, initialBookmark);
  }
}

// D1DatabaseSession holds metadata about the session, most importantly the
// latest bookmark we know about for this session.
class D1DatabaseSession {
  constructor(d1Service, databaseId, initialBookmark) {
    this.d1Service = d1Service;
    this.databaseId = databaseId;
    this.bookmark = initialBookmark;
  }

  async exec(query) {
    // The exec method in the binding sends the query to the D1 Worker
    // and waits for the response, updating the bookmark as
    // necessary so that future calls to exec use the updated bookmark.
    var resp = await this.d1Service.handleUserQuery(this.databaseId, query, this.bookmark);
    if (isNewerBookmark(this.bookmark, resp.bookmark)) {
      this.bookmark = resp.bookmark;
    }
    return resp;
  }

  // batch and other SQL APIs are implemented similarly.
}</code></pre>
            <p>The binding code calls into the D1 stateless Worker (<code>d1Service</code> in the snippet above), which figures out which Durable Object to use, and proxies the request to the Durable Object.</p>
            <pre><code>class D1Worker {
  async handleUserQuery(databaseId, query, bookmark) {
    var doId = /* look up Durable Object for databaseId */;
    return await this.D1_DO.get(doId).handleWorkerQuery(query, bookmark);
  }
}</code></pre>
            <p>Finally, we reach the Durable Objects layer, which figures out how to actually handle the request.</p>
            <pre><code>class D1DurableObject {
  async handleWorkerQuery(query, bookmark) {
    bookmark = bookmark ?? "first-primary";
    var results = {};

    if (this.isPrimaryDatabase()) {
      // The primary always has the latest data, so we can run the
      // query without checking the bookmark.
      var result = /* execute query directly */;
      bookmark = getCurrentBookmark();
      results = result;
    } else {
      // This is running on a replica.
      if (bookmark === "first-primary" || isWriteQuery(query)) {
        // The primary must handle this request, so we'll proxy the
        // request to the primary.
        var resp = await this.primary.handleWorkerQuery(query, bookmark);
        bookmark = resp.bookmark;
        results = resp.results;
      } else {
        // The replica can handle this request, but only after the
        // database is up-to-date with the bookmark.
        if (bookmark !== "first-unconditional") {
          await waitForBookmark(bookmark);
        }
        var result = /* execute query locally */;
        bookmark = getCurrentBookmark();
        results = result;
      }
    }
    return { results: results, bookmark: bookmark };
  }
}</code></pre>
            <p>The D1 Durable Object first figures out if this instance can handle the query, or if the query needs to be sent to the primary.  If the Durable Object can execute the query, it ensures that we execute the query with a bookmark at least as up-to-date as the bookmark requested by the binding.</p><p>The upshot is that the three pieces of code work together to ensure that all of the queries in the session see the database in a sequentially consistent order, because each new query will be blocked until it has seen the results of previous queries within the same session.</p>
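<p>The snippets above leave <code>isNewerBookmark</code> and <code>waitForBookmark</code> undefined. As a hedged sketch (not D1’s actual implementation): because bookmarks are strictly monotonically increasing, we can model them as fixed-width, zero-padded strings whose lexicographic order matches logical time, and a replica can simply wait until its replication progress catches up to the requested bookmark:</p>

```typescript
// Hypothetical helpers, assuming bookmarks are modeled as fixed-width,
// zero-padded strings so that lexicographic order matches logical time.
// Real D1 bookmarks are opaque tokens with the same ordering property.

// Returns true if `candidate` is strictly newer than `current`.
function isNewerBookmark(current: string, candidate: string): boolean {
  return candidate > current;
}

// Resolves once the replica has applied WAL entries up to `target`.
// `getCurrentBookmark` stands in for the replica's replication progress;
// here we poll, though a real implementation would more likely subscribe
// to the replication stream instead.
async function waitForBookmark(
  target: string,
  getCurrentBookmark: () => string,
  pollMs = 10
): Promise<void> {
  while (getCurrentBookmark() < target) {
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}
```

<p>With this ordering property, the replica in <code>handleWorkerQuery</code> never serves a read older than the session’s bookmark, which is exactly the guarantee sequential consistency needs.</p>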
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>D1’s new read replication feature is a significant step towards making globally distributed databases easier to use without sacrificing consistency. With automatically provisioned replicas in every region, your applications can now serve read queries faster while maintaining strong sequential consistency across requests, and keeping your application Worker code simple.</p><p>We’re excited for developers to explore this feature and see how it improves the performance of your applications. The public beta is just the beginning—we’re actively refining and expanding D1’s capabilities, including evolving replica placement policies, and your feedback will help shape what’s next.</p><p>Note that the Sessions API is only available through the <a href="https://developers.cloudflare.com/d1/worker-api/"><u>D1 Worker Binding</u></a> for now, and support for the HTTP REST API will follow soon.</p><p>Try out D1 read replication today by clicking the “Deploy to Cloudflare” button, check out <a href="http://developers.cloudflare.com/d1/best-practices/read-replication/"><u>documentation and examples</u></a>, and let us know what you build in the <a href="https://discord.com/channels/595317990191398933/992060581832032316"><u>D1 Discord channel</u></a>!</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/d1-starter-sessions-api-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
 ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Deep Dive]]></category>
            <category><![CDATA[Edge Database]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[SQL]]></category>
            <guid isPermaLink="false">2qUAO70BqnRBomg83fCRPe</guid>
            <dc:creator>Justin Mazzola Paluska</dc:creator>
            <dc:creator>Lambros Petrou</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building D1: a Global Database]]></title>
            <link>https://blog.cloudflare.com/building-d1-a-global-database/</link>
            <pubDate>Mon, 01 Apr 2024 13:00:41 GMT</pubDate>
            <description><![CDATA[ D1, Cloudflare’s SQL database, is now generally available.  ]]></description>
            <content:encoded><![CDATA[
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/76hMKeBHewbCLm4XlVS4zL/92271c25576185cad1ab5e70e29ede58/image2-33.png" />
            
            </figure><p>Developers who build Worker applications focus on what they're creating, not the infrastructure required, and benefit from the global reach of <a href="https://www.cloudflare.com/network/">Cloudflare's network</a>. Many applications require persistent data, from personal projects to business-critical workloads. Workers offer various <a href="https://developers.cloudflare.com/workers/platform/storage-options/">database and storage options</a> tailored to developer needs, such as key-value and <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>.</p><p>Relational databases are the backbone of many applications today. <a href="https://developers.cloudflare.com/d1/">D1</a>, Cloudflare's relational database complement, is now generally available. Our journey from alpha in late 2022 to GA in April 2024 focused on enabling developers to build production workloads with the familiarity of relational data and SQL.</p>
    <div>
      <h3>What’s D1?</h3>
      <a href="#whats-d1">
        
      </a>
    </div>
    <p>D1 is Cloudflare's built-in, serverless relational database. For Worker applications, D1 offers SQL's expressiveness, leveraging SQLite's SQL dialect, and developer tooling integrations, including object-relational mappers (ORMs) like <a href="https://orm.drizzle.team/docs/connect-cloudflare-d1">Drizzle ORM</a>. D1 is accessible via <a href="https://developers.cloudflare.com/d1/build-with-d1/d1-client-api/">Workers</a> or an <a href="https://developers.cloudflare.com/api/operations/cloudflare-d1-create-database">HTTP API</a>.</p><p>Serverless means no provisioning, default disaster recovery with <a href="https://developers.cloudflare.com/d1/reference/time-travel/">Time Travel</a>, and <a href="https://developers.cloudflare.com/d1/platform/pricing/">usage-based pricing</a>. D1 includes a generous free tier that allows developers to experiment with D1 and then graduate those trials to production.</p>
    <div>
      <h3>How to make data global?</h3>
      <a href="#how-to-make-data-global">
        
      </a>
    </div>
    <p>D1 GA has focused on reliability and developer experience. Now, we plan on extending D1 to better support globally-distributed applications.</p><p>In the Workers model, an incoming request invokes serverless execution in the closest data center. A Worker application can scale globally with user requests. Application data, however, remains stored in centralized databases, and global user traffic must account for access round trips to data locations. For example, a D1 database today resides in a single location.</p><p>Workers support <a href="https://developers.cloudflare.com/workers/configuration/smart-placement">Smart Placement</a> to account for frequently accessed data locality. Smart Placement invokes a Worker closer to centralized backend services like databases to lower latency and improve application performance. We’ve addressed Workers placement in global applications, but need to solve data placement.</p><p>The question, then, is how can D1, as Cloudflare’s <a href="https://www.cloudflare.com/developer-platform/products/d1/">built-in database solution</a>, better support data placement for global applications? The answer is asynchronous read replication.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1I58tQFeSOcIyqrGfv9UnB/c1bc267b8cd2eb09332ae909429aeb5b/image4-30.png" />
            
            </figure>
    <div>
      <h3>What is asynchronous read replication?</h3>
      <a href="#what-is-asynchronous-read-replication">
        
      </a>
    </div>
    <p>In a server-based database management system, like Postgres, MySQL, SQL Server, or Oracle, a <b><i>read replica</i></b> is a separate database server that serves as a read-only, almost up-to-date copy of the primary database server. An administrator creates a read replica by starting a new server from a snapshot of the primary server and configuring the primary server to send updates asynchronously to the replica server. Since the updates are asynchronous, the read replica may be behind the current state of the primary server. The difference between the primary server and a replica is called <b><i>replica lag</i></b>. It's possible to have more than one read replica.</p><p>Asynchronous read replication is a time-proven solution for improving the performance of databases:</p><ul><li><p>It's possible to increase throughput by distributing load across multiple replicas.</p></li><li><p>It's possible to lower query latency when the replicas are close to the users making queries.</p></li></ul><p>Note that some database systems also offer synchronous replication. In a synchronous replicated system, writes must wait until all replicas have confirmed the write. Synchronous replicated systems can run only as fast as the slowest replica and come to a halt when a replica fails. If we’re trying to improve performance on a global scale, we want to avoid synchronous replication as much as possible!</p>
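<p>As a toy model of the asynchronous flow just described (illustrative TypeScript, not any real database’s API): the primary commits locally and acknowledges the write immediately, shipping each update to its replicas in the background, and the shipping delay is precisely the replica lag:</p>

```typescript
// Toy model of asynchronous read replication. The primary commits writes
// locally and ships them to replicas in the background; replica lag is the
// window during which a replica has not yet applied a shipped write.
// All names here are invented for illustration.

class ToyReplica {
  data = new Map<string, string>();
  constructor(public lagMs: number) {}

  apply(key: string, value: string): void {
    this.data.set(key, value);
  }

  read(key: string): string | undefined {
    return this.data.get(key); // may be stale: that's replica lag
  }
}

class ToyPrimary {
  data = new Map<string, string>();
  private replicas: ToyReplica[] = [];

  attach(replica: ToyReplica): void {
    this.replicas.push(replica);
  }

  write(key: string, value: string): void {
    this.data.set(key, value); // commit locally, acknowledge immediately
    for (const r of this.replicas) {
      // Ship the update asynchronously; the write does not wait for it.
      setTimeout(() => r.apply(key, value), r.lagMs);
    }
  }
}
```

<p>A read against a replica inside its lag window returns the old value, which is exactly the staleness the consistency discussion below has to reckon with.</p>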
    <div>
      <h3>Consistency models &amp; read replicas</h3>
      <a href="#consistency-models-read-replicas">
        
      </a>
    </div>
    <p>Most database systems provide <a href="https://jepsen.io/consistency/models/read-committed">read committed</a>, <a href="https://jepsen.io/consistency/models/snapshot-isolation">snapshot isolation</a>, or <a href="https://jepsen.io/consistency/models/serializable">serializable</a> consistency models, depending on their configuration. For example, Postgres <a href="https://jepsen.io/consistency/models/read-committed">defaults to read committed</a> but can be configured to use stronger modes. SQLite provides <a href="https://www.sqlite.org/draft/isolation.html">snapshot isolation in WAL mode</a>. Stronger modes like snapshot isolation or serializable are easier to program against because they limit the permitted system concurrency scenarios and the kind of concurrency race conditions the programmer has to worry about.</p><p>Read replicas are updated independently, so each replica's contents may differ at any moment. If all of your queries go to the same server, whether the primary or a read replica, your results should be consistent according to whatever <a href="https://jepsen.io/consistency">consistency model</a> your underlying database provides. If you're using a read replica, the results may just be a little old.</p><p>In a server-based database with read replicas, it's important to stick with the same server for all of the queries in a session. If you switch among different read replicas in the same session, you compromise the consistency model provided by your application, which may violate your assumptions about how the database acts and cause your application to return incorrect results!</p><p><b>Example</b>: Suppose there are two replicas, A and B. Replica A lags the primary database by 100ms, and replica B lags the primary database by 2s. Suppose a user wishes to:</p><ol><li><p>Execute query 1.</p></li><li><p>Do some computation based on query 1’s results.</p></li><li><p>Execute query 2 based on the results of the computation in step 2.</p></li></ol><p>At time t=10s, query 1 goes to replica A and returns. Query 1 sees what the primary database looked like at t=9.9s. Suppose it takes 500ms to do the computation, so at t=10.5s, query 2 goes to replica B. Remember, replica B lags the primary database by 2s, so at t=10.5s, query 2 sees what the database looks like at t=8.5s. As far as the application is concerned, the results of query 2 look like the database has gone backwards in time!</p>
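<p>The arithmetic above can be made concrete in a few lines (a hypothetical model in which each replica serves the primary’s state as of <i>now − lag</i>):</p>

```typescript
// Each replica serves the primary's state as of (now - lag), in seconds.
// This is an illustrative model, not real database code.
function stateSeenByReplica(nowSeconds: number, lagSeconds: number): number {
  return nowSeconds - lagSeconds;
}

const lagA = 0.1; // replica A lags the primary by 100ms
const lagB = 2.0; // replica B lags the primary by 2s

const query1Sees = stateSeenByReplica(10.0, lagA); // query 1 at t=10s sees t=9.9s
const query2Sees = stateSeenByReplica(10.5, lagB); // query 2 at t=10.5s sees t=8.5s

// Query 2 runs later in wall-clock time yet observes an older database
// state: from the application's view, the database went backwards in time.
console.log(query2Sees < query1Sees);
```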
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2R1p29j20c7szuRmY2Sjlp/52e4982c6c45e18c4d0c18835931b016/image3-34.png" />
            
            </figure><p>Formally, this is <a href="https://jepsen.io/consistency/models/read-committed">read committed</a> consistency since your queries will only see committed data, but there’s no other guarantee: not even that you can read your own writes. While read committed is a valid consistency model, it’s hard to reason about all of the possible race conditions the read committed model allows, making it difficult to write applications correctly.</p>
    <div>
      <h3>D1’s consistency model &amp; read replicas</h3>
      <a href="#d1s-consistency-model-read-replicas">
        
      </a>
    </div>
    <p>By default, D1 provides the <a href="https://jepsen.io/consistency/models/snapshot-isolation">snapshot isolation</a> that SQLite provides.</p><p>Snapshot isolation is a familiar consistency model that most developers find easy to use. We implement this consistency model in D1 by ensuring at most one active copy of the D1 database and routing all HTTP requests to that single database. While ensuring that there's at most one active copy of the D1 database is a gnarly distributed systems problem, it's one that we’ve solved by building D1 using <a href="https://developers.cloudflare.com/durable-objects/">Durable Objects</a>. Durable Objects guarantee global uniqueness, so once we depend on Durable Objects, routing HTTP requests is easy: just send them to the D1 Durable Object.</p><p>This trick doesn't work if you have multiple active copies of the database since there's no 100% reliable way to look at a generic incoming HTTP request and route it to the same replica 100% of the time. Unfortunately, as we saw in the previous section's example, if we don't route related requests to the same replica 100% of the time, the best consistency model we can provide is read committed.</p><p>Given that it's impossible to route to a particular replica consistently, another approach is to route requests to any replica and ensure that the chosen replica responds to requests according to a consistency model that "makes sense" to the programmer. If we're willing to include a <a href="https://en.wikipedia.org/wiki/Lamport_timestamp">Lamport timestamp</a> in our requests, we can implement <a href="https://jepsen.io/consistency/models/sequential">sequential consistency</a> using any replica. The sequential consistency model has important properties like "<a href="https://jepsen.io/consistency/models/read-your-writes">read my own writes</a>" and "<a href="https://jepsen.io/consistency/models/writes-follow-reads">writes follow reads</a>," as well as a total ordering of writes. 
The total ordering of writes means that every replica will see transactions commit in the same order, which is exactly the behavior we want in a transactional system. Sequential consistency comes with the caveat that any individual entity in the system may be arbitrarily out of date, but that caveat is a feature for us because it allows us to consider replica lag when designing our APIs.</p><p>The idea is that if D1 gives applications a Lamport timestamp for every database query and those applications tell D1 the last Lamport timestamp they've seen, we can have each replica determine how to make queries work according to the sequential consistency model.</p><p>A robust, yet simple, way to implement sequential consistency with replicas is to:</p><ul><li><p>Associate a Lamport timestamp with every single request to the database. A monotonically increasing commit token works well for this.</p></li><li><p>Send all write queries to the primary database to ensure the total ordering of writes.</p></li><li><p>Send read queries to any replica, but have the replica delay servicing the query until the replica receives updates from the primary database that are later than the Lamport timestamp in the query.</p></li></ul><p>What's nice about this implementation is that it's fast in the common case where a read-heavy workload always goes to the same replica and will work even if requests get routed to different replicas.</p>
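<p>The three rules above can be condensed into a small routing sketch (a hedged model with numeric tokens standing in for D1’s commit tokens; <code>routeQuery</code>, <code>catchUpTo</code>, and the interfaces are invented for illustration, not D1’s internals):</p>

```typescript
// A simplified model of the sequential-consistency rules: a monotonically
// increasing numeric commit token acts as the Lamport timestamp.

type Token = number;

interface Primary {
  token: Token;                 // latest committed token
  write(query: string): Token;  // applies the write and bumps the token
}

interface Replica {
  appliedToken: Token;          // how far replication has caught up
  read(query: string): unknown;
  catchUpTo(token: Token): Promise<void>; // wait on the replication stream
}

async function routeQuery(
  query: string,
  isWrite: boolean,
  sessionToken: Token,
  primary: Primary,
  replica: Replica
): Promise<{ result: unknown; token: Token }> {
  if (isWrite) {
    // Rule 2: every write goes to the primary, giving a total order of writes.
    const token = primary.write(query);
    return { result: undefined, token };
  }
  // Rule 3: a read may go to any replica, but the replica must first have
  // applied updates at least as new as the session's Lamport timestamp.
  if (replica.appliedToken < sessionToken) {
    await replica.catchUpTo(sessionToken);
  }
  return {
    result: replica.read(query),
    token: Math.max(sessionToken, replica.appliedToken),
  };
}
```

<p>Because the returned token is fed into the session’s next call, a read that follows a write can never land on a replica state older than that write, which is the crux of the common-case speed claim above.</p>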
    <div>
      <h3><b><i>Sneak Preview:</i></b> bringing read replication to D1 with Sessions</h3>
      <a href="#sneak-preview-bringing-read-replication-to-d1-with-sessions">
        
      </a>
    </div>
    <p>To bring read replication to D1, we will expand the D1 API with a new concept: <b>Sessions</b>. A Session encapsulates all the queries representing one logical session for your application. For example, a Session might represent all requests coming from a particular web browser or all requests coming from a mobile app. If you use Sessions, your queries will use whatever copy of the D1 database makes the most sense for your request, be that the primary database or a nearby replica. D1's Sessions implementation will ensure sequential consistency for all queries in the Session.</p><p>Since the Sessions API changes D1's consistency model, developers must opt-in to the new API. Existing D1 API methods are unchanged and will still have the same snapshot isolation consistency model as before. However, only queries made using the new Sessions API will use replicas.</p><p>Here’s an example of the D1 Sessions API:</p>
            <pre><code>export default {
  async fetch(request: Request, env: Env) {
    // When we create a D1 Session, we can continue where we left off
    // from a previous Session if we have that Session's last commit
    // token.  This Worker will return the commit token back to the
    // browser, so that it can send it back on the next request to
    // continue the Session.
    //
    // If we don't have a commit token, make the first query in this
    // session an "unconditional" query that will use the state of the
    // database at whatever replica we land on.
    const token = request.headers.get('x-d1-token') ?? 'first-unconditional'
    const session = env.DB.withSession(token)

    // Use this Session for all of this Worker's routes.
    const response = await handleRequest(request, session)

    if (response.status === 200) {
      // Set the token so we can continue the Session in another request.
      response.headers.set('x-d1-token', session.latestCommitToken)
    }
    return response
  }
}

async function handleRequest(request: Request, session: D1DatabaseSession) {
  const { pathname } = new URL(request.url)

  if (pathname === '/api/orders/list') {
    // This statement is a read query, so it will execute on any
    // replica that has a commit equal to or later than the `token`
    // we used to create the Session.
    const { results } = await session.prepare('SELECT * FROM Orders').all()

    return Response.json(results)
  } else if (pathname === '/api/orders/add') {
    const order = await request.json&lt;Order&gt;()

    // This statement is a write query, so D1 will send the query to
    // the primary, which always has the latest commit token.
    await session
      .prepare('INSERT INTO Orders VALUES (?, ?, ?)')
      .bind(order.orderName, order.customer, order.value)
      .run()

    // In order for the application to be correct, this SELECT
    // statement must see the results of the INSERT statement above.
    // The Session API keeps track of commit tokens for queries
    // within the session and will ensure that we won't execute this
    // query until whatever replica we're using has seen the results
    // of the INSERT.
    const { results } = await session
      .prepare('SELECT COUNT(*) FROM Orders')
      .all()

    return Response.json(results)
  }

  return new Response('Not found', { status: 404 })
}</code></pre>
            <p>D1’s implementation of Sessions makes use of commit tokens.  Commit tokens identify a particular committed query to the database.  Within a session, D1 will use commit tokens to ensure that queries are sequentially ordered.  In the example above, the D1 session ensures that the “SELECT COUNT(*)” query happens <i>after</i> the “INSERT” of the new order, <i>even if</i> we switch replicas between the awaits.  </p><p>There are several options on how you want to start a session in a Workers fetch handler.  <code>db.withSession(&lt;condition&gt;)</code> accepts these arguments:</p><table><colgroup><col></col><col></col></colgroup><tbody><tr><td><p><span><b><code>condition</code> argument</b></span></p></td><td><p><span><b>Behavior</b></span></p></td></tr><tr><td><p><span><code>&lt;commit_token&gt;</code></span></p></td><td><p><span>(1) starts Session as of given commit token</span></p><p><span>(2) subsequent queries have sequential consistency</span></p></td></tr><tr><td><p><span><code>first-unconditional</code></span></p></td><td><p><span>(1) if the first query is read, read whatever current replica has and use the commit token of that read as the basis for subsequent queries.  If the first query is a write, forward the query to the primary and use the commit token of the write as the basis for subsequent queries.</span></p><p><span>(2) subsequent queries have sequential consistency</span></p></td></tr><tr><td><p><span><code>first-primary</code></span></p></td><td><p><span>(1) runs first query, read or write, against the primary</span></p><p><span>(2) subsequent queries have sequential consistency</span></p></td></tr><tr><td><p><span><code>null</code> or missing argument</span></p></td><td><p><span>treated like <code>first-unconditional</code> </span></p></td></tr></tbody></table><p>It’s possible to have a session span multiple requests by “round-tripping” the commit token from the last query of the session and using it to start a new session.  
This enables individual user agents, like a web app or a mobile app, to make sure that all of the queries the user sees are sequentially consistent.</p><p>D1’s read replication will be built-in, will not incur extra usage or storage costs, and will require no replica configuration. Cloudflare will <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">monitor</a> an application’s D1 traffic and automatically create database replicas to spread user traffic across multiple servers in locations closer to users. Aligned with our serverless model, D1 developers shouldn’t worry about replica provisioning and management. Instead, developers should focus on designing applications for replication and data consistency tradeoffs.</p><p>We’re actively working on global read replication and realizing the above proposal (share feedback in the <a href="https://discord.cloudflare.com/">#d1 channel</a> on our Developer Discord). Until then, D1 GA includes several exciting new additions.</p>
    <div>
      <h3>Check out D1 GA</h3>
      <a href="#check-out-d1-ga">
        
      </a>
    </div>
    <p>Since D1’s open beta in October 2023, we’ve focused on giving D1 the reliability, scalability, and developer experience that critical services demand. We’ve invested in several new features that allow developers to build and debug applications faster with D1.</p><p><b>Build bigger with larger databases</b>We’ve listened to developers who requested larger databases. D1 now supports up to 10 GB databases, with 50K databases on the Workers Paid plan. With D1’s horizontal scaleout, applications can model database-per-business-entity use cases. Since beta, new D1 databases process 40x more requests than D1 alpha databases in a given period.</p><p><b>Import &amp; export bulk data</b>Developers import and export data for multiple reasons:</p><ul><li><p>Database migration testing to/from different database systems</p></li><li><p>Data copies for local development or testing</p></li><li><p>Manual backups for custom requirements like compliance</p></li></ul><p>While you could execute SQL files against D1 before, we’re improving <code>wrangler d1 execute --file=&lt;filename&gt;</code> to ensure large imports are atomic operations, never leaving your database in a halfway state. <code>wrangler d1 execute</code> also now defaults to local-first to protect your remote production database.</p><p>To import our <a href="https://github.com/cloudflare/d1-northwind/tree/main">Northwind Traders</a> demo database, you can download the <a href="https://github.com/cloudflare/d1-northwind/blob/main/db/schema.sql">schema</a> &amp; <a href="https://github.com/cloudflare/d1-northwind/blob/main/db/data.sql">data</a> and execute the SQL files.</p>
            <pre><code>npx wrangler d1 create northwind-traders

# omit --remote to run on a local database for development
npx wrangler d1 execute northwind-traders --remote --file=./schema.sql

npx wrangler d1 execute northwind-traders --remote --file=./data.sql</code></pre>
            <p>D1 database data &amp; schema, schema-only, or data-only can be exported to a SQL file using:</p>
            <pre><code># database schema &amp; data
npx wrangler d1 export northwind-traders --remote --output=./database.sql

# single table schema &amp; data
npx wrangler d1 export northwind-traders --remote --table='Employee' --output=./table.sql

# database schema only
npx wrangler d1 export &lt;database_name&gt; --remote --output=./database-schema.sql --no-data=true</code></pre>
            <p><b>Debug query performance</b>Understanding SQL query performance and debugging slow queries is a crucial step for production workloads. We’ve added the experimental <a href="https://developers.cloudflare.com/d1/observability/metrics-analytics/#query-insights"><code>wrangler d1 insights</code></a> to help developers analyze query performance metrics also available via <a href="https://developers.cloudflare.com/d1/observability/metrics-analytics/">GraphQL API</a>.</p>
            <pre><code># To find top 10 queries by average execution time:
npx wrangler d1 insights &lt;database_name&gt; --sort-type=avg --sort-by=time --count=10</code></pre>
            <p><b>Developer tooling</b>Various <a href="https://developers.cloudflare.com/d1/reference/community-projects">community developer projects</a> support D1. New additions include <a href="https://developers.cloudflare.com/d1/tutorials/d1-and-prisma-orm">Prisma ORM</a>, in version 5.12.0, which now supports Workers and D1.</p>
    <div>
      <h3>Next steps</h3>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>The features available now with GA and our global read replication design are just the start of delivering the SQL database needs for developer applications. If you haven’t yet used D1, you can <a href="https://developers.cloudflare.com/d1/get-started/">get started</a> right now, visit D1’s <a href="https://developers.cloudflare.com/d1/">developer documentation</a> to spark some ideas, or <a href="https://discord.cloudflare.com/">join the #d1 channel</a> on our Developer Discord to talk to other D1 developers and our product engineering team.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dTCMeWMaQjhBd1SM8hM6O/2cbe9ec1a7a4fb0c061afe0e1c0bf666/image1-35.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Database]]></category>
            <guid isPermaLink="false">6y8LbpExPriYEVMgzCDp4B</guid>
            <dc:creator>Vy Ton</dc:creator>
            <dc:creator>Justin Mazzola Paluska</dc:creator>
        </item>
    </channel>
</rss>