
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 23:14:23 GMT</lastBuildDate>
        <item>
            <title><![CDATA[LangChain Support for Workers AI, Vectorize and D1]]></title>
            <link>https://blog.cloudflare.com/langchain-support-for-workers-ai-vectorize-and-d1/</link>
            <pubDate>Wed, 31 Jan 2024 14:00:12 GMT</pubDate>
            <description><![CDATA[ During Developer Week, we announced LangChain support for Cloudflare Workers. Since then, we’ve been working with the LangChain team on deeper integration of many tools across Cloudflare’s developer platform and are excited to share what we’ve been up to. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7vfmv2IUCSWhFxb0iIhRwp/d0ac73a938febe9bc24538979871acba/X2uuTU5jqOf4kskV9IUy6-EFJtW1QL7NCTeaIK1Ezs29fHv5rxii32xZ_eAu-9IHMQQhzevrxEeUR4Zq5_C0Y_HmIciI-GZaj-RbEnRI4vnshmYAV6jymeq1KXQr.png" />
            
            </figure><p>During Developer Week, we announced <a href="/langchain-and-cloudflare/">LangChain support for Cloudflare Workers</a>. LangChain is an open-source framework that allows developers to create powerful AI workflows by combining different models, providers, and plugins using a declarative API — and it dovetails perfectly with Workers for creating full-stack, AI-powered applications.</p><p>Since then, we’ve been working with the LangChain team on deeper integration of many tools across Cloudflare’s developer platform and are excited to share what we’ve been up to.</p><p>Today, we’re announcing five new key integrations with LangChain:</p><ol><li><p><a href="https://js.langchain.com/docs/integrations/chat/cloudflare_workersai">Workers AI Chat Models</a>: This allows you to use <a href="https://developers.cloudflare.com/workers-ai/models/text-generation/">Workers AI text generation</a> to power your chat model within your LangChain.js application.</p></li><li><p><a href="https://js.langchain.com/docs/integrations/llms/cloudflare_workersai">Workers AI Instruct Models</a>: This allows you to use Workers AI models fine-tuned for instruct use-cases, such as Mistral and CodeLlama, inside your LangChain.js application.</p></li><li><p><a href="https://js.langchain.com/docs/integrations/text_embedding/cloudflare_ai">Text Embeddings Models</a>: If you’re working with text embeddings, you can now use <a href="https://developers.cloudflare.com/workers-ai/models/text-embeddings/">Workers AI text embeddings</a> with LangChain.js.</p></li><li><p><a href="https://js.langchain.com/docs/integrations/vectorstores/cloudflare_vectorize">Vectorize Vector Store</a>: When working with a vector database and LangChain.js, you now have the option of using <a href="https://developers.cloudflare.com/vectorize/">Vectorize</a>, Cloudflare’s powerful vector database.</p></li><li><p><a href="https://js.langchain.com/docs/integrations/chat_memory/cloudflare_d1">Cloudflare 
D1-Backed Chat Memory</a>: For longer-term persistence across chat sessions, you can swap out LangChain’s default in-memory chatHistory that backs chat memory classes like BufferMemory for a <a href="https://developers.cloudflare.com/d1/">Cloudflare D1 instance</a>.</p></li></ol><p>With the addition of these five Cloudflare AI tools into LangChain, developers have powerful new primitives to integrate into new and existing AI applications. With LangChain’s expressive tooling for mixing and matching AI tools and models, you can use Vectorize, Cloudflare AI’s text embedding and generation models, and <a href="https://www.cloudflare.com/developer-platform/products/d1/">Cloudflare D1 </a>to build a fully-featured AI application in just a few lines of code.</p><blockquote><p>This is a full persistent chat app powered by an LLM in 10 lines of code–deployed to <a href="https://twitter.com/Cloudflare?ref_src=twsrc%5Etfw">@Cloudflare</a> Workers, powered by <a href="https://twitter.com/LangChainAI?ref_src=twsrc%5Etfw">@LangChainAI</a> and <a href="https://twitter.com/Cloudflare?ref_src=twsrc%5Etfw">@Cloudflare</a> D1.</p><p>You can even pass in a unique sessionId and have completely user/session-specific conversations 🤯 <a href="https://t.co/le9vbMZ7Mc">https://t.co/le9vbMZ7Mc</a> <a href="https://t.co/jngG3Z7NQ6">pic.twitter.com/jngG3Z7NQ6</a></p><p>— Kristian Freeman (@kristianf_) <a href="https://twitter.com/kristianf_/status/1704592544099631243?ref_src=twsrc%5Etfw">September 20, 2023</a></p></blockquote>
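<p>To make the "session-specific conversations" idea above concrete, here is a minimal sketch of the pattern that D1-backed chat memory implements: messages keyed by session ID so each user gets an isolated history. The class and method names below are made up for illustration, not LangChain.js's actual API - see the D1-backed chat memory docs linked above for the real interface.</p>

```javascript
// Illustrative sketch of the pattern behind D1-backed chat memory.
// Names are invented for illustration; in LangChain.js this is backed
// by a real D1 table instead of an in-memory array.
class SessionMessageStore {
  constructor() {
    // Stand-in for a D1 table of (session_id, role, content) rows.
    this.rows = [];
  }

  addMessage(sessionId, role, content) {
    // In D1: INSERT INTO messages (session_id, role, content) VALUES (?, ?, ?)
    this.rows.push({ sessionId, role, content });
  }

  getMessages(sessionId) {
    // In D1: SELECT role, content FROM messages WHERE session_id = ?
    return this.rows
      .filter((r) => r.sessionId === sessionId)
      .map(({ role, content }) => ({ role, content }));
  }
}

const store = new SessionMessageStore();
store.addMessage("session-a", "human", "Hi!");
store.addMessage("session-b", "human", "An unrelated conversation");
store.addMessage("session-a", "ai", "Hello! How can I help?");

console.log(store.getMessages("session-a").length); // 2
```

<p>Because every read and write is scoped by session ID, passing a unique <code>sessionId</code> per user is all it takes to keep conversations separate.</p>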
    <div>
      <h3>Getting started with a Cloudflare + LangChain + Nuxt Multi-source Chatbot template</h3>
      <a href="#getting-started-with-a-cloudflare-langchain-nuxt-multi-source-chatbot-template">
        
      </a>
    </div>
    <p>You can get started by using LangChain’s Cloudflare Chatbot template: <a href="https://github.com/langchain-ai/langchain-cloudflare-nuxt-template">https://github.com/langchain-ai/langchain-cloudflare-nuxt-template</a></p><p>This application shows how various pieces of Cloudflare Workers AI fit together and expands on the concept of <a href="https://developers.cloudflare.com/workers-ai/tutorials/build-a-retrieval-augmented-generation-ai/">retrieval-augmented generation (RAG)</a> to build a conversational retrieval system that can route between multiple data sources, choosing the most relevant one based on the incoming question. This method helps cut down on distraction from off-topic documents getting pulled in by a vector store’s similarity search, which could occur if only a single database were used.</p><p>The base version runs entirely on the Cloudflare Workers AI stack with the Llama 2-7B model. It uses:</p><ul><li><p>A chat variant of Llama 2-7B run on Cloudflare Workers AI</p></li><li><p>A Cloudflare Workers AI embeddings model</p></li><li><p>Two different Cloudflare Vectorize DBs (though you could add more!)</p></li><li><p>Cloudflare Pages for hosting</p></li><li><p>LangChain.js for orchestration</p></li><li><p>Nuxt + Vue for the frontend</p></li></ul><p>The two default data sources are <a href="https://www.cloudflare.com/resources/assets/slt3lc6tev37/3HWObubm6fybC0FWUdFYAJ/5d5e3b0a4d9c5a7619984ed6076f01fe/Cloudflare_for_Campaigns_Security_Guide.pdf">a PDF detailing some of Cloudflare's features</a> and <a href="https://lilianweng.github.io/posts/2023-06-23-agent/">a blog post by Lilian Weng at OpenAI</a> that talks about autonomous agents.</p><p>The bot will classify incoming questions as being about Cloudflare, AI, or neither, and draw on the corresponding data source for more targeted results. 
Everything is fully customizable - you can change the content of the ingested data, the models used, and all prompts!</p><p>And if you have access to the <a href="https://smith.langchain.com/">LangSmith</a> beta, the app also has tracing set up so that you can easily <a href="https://smith.langchain.com/public/24807f4a-4335-497e-bfbf-3a1de019b22e/r">see and debug each step</a> in the application.</p>
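<p>The routing step described above boils down to a dispatch from a classification label to a data source. Here is a simplified stand-in - in the template, an LLM produces the label, and the store names below are illustrative, not the template's actual bindings:</p>

```javascript
// Simplified stand-in for the template's router: the real app asks an
// LLM to classify the question; here the label is supplied directly so
// the dispatch is visible. Store names are illustrative only.
const dataSources = {
  cloudflare: "vectorize-cloudflare-docs",
  ai: "vectorize-ai-agents-post",
};

function pickDataSource(label) {
  // "neither" (or any unknown label) skips retrieval entirely and
  // falls back to answering from the base model alone.
  return dataSources[label] ?? null;
}

console.log(pickDataSource("cloudflare")); // "vectorize-cloudflare-docs"
console.log(pickDataSource("neither"));    // null
```

<p>Adding a third data source is then just a matter of ingesting it into another Vectorize index and teaching the classifier one more label.</p>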
    <div>
      <h3>We can’t wait to see what you build</h3>
      <a href="#we-cant-wait-to-see-what-you-build">
        
      </a>
    </div>
    <p>We can't wait to see what you all build with LangChain and Cloudflare. Come tell us about it on <a href="https://discord.cloudflare.com/">Discord</a> or on our <a href="https://community.cloudflare.com/">community forums</a>.</p> ]]></content:encoded>
            <category><![CDATA[LangChain]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">5JnxNQ7W8w3O5d2wAroT18</guid>
            <dc:creator>Ricky Robinett</dc:creator>
            <dc:creator>Kristian Freeman</dc:creator>
            <dc:creator>Jacob Lee (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Magic in minutes: how to build a ChatGPT plugin with Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/magic-in-minutes-how-to-build-a-chatgpt-plugin-with-cloudflare-workers/</link>
            <pubDate>Fri, 12 May 2023 19:05:00 GMT</pubDate>
            <description><![CDATA[ Announcing our new Quickstart example for building ChatGPT Plugins with Cloudflare Workers ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, we're open-sourcing our <a href="https://github.com/cloudflare/chatgpt-plugin/tree/main/example-plugin">ChatGPT Plugin Quickstart repository for Cloudflare Workers</a>, designed to help you build awesome and versatile plugins for ChatGPT with ease. If you don’t already know, ChatGPT is a conversational AI model from <a href="https://www.openai.com/">OpenAI</a> which has an uncanny ability to take chat input and generate human-like text responses.</p><p>With the recent addition of <a href="https://www.cloudflare.com/learning/ai/chatgpt-plugins/">ChatGPT plugins</a>, developers can create custom extensions and integrations to make ChatGPT even more powerful. Developers can now provide custom flows for ChatGPT to integrate into its conversational workflow – for instance, the ability to look up products when asking questions about shopping, or retrieving information from an <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">API</a> in order to have up-to-date data when working through a problem.</p><p>That's why we're super excited to contribute to the growth of ChatGPT plugins with our new Quickstart template. Our goal is to make it possible to build and deploy a new ChatGPT plugin to production in minutes, so developers can focus on creating incredible conversational experiences tailored to their specific needs.</p>
    <div>
      <h2>How it works</h2>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>Our Quickstart is designed to work seamlessly with <a href="https://workers.cloudflare.com">Cloudflare Workers</a>. Under the hood, it uses our command-line tool <a href="https://developers.cloudflare.com/workers/cli-wrangler"><code>wrangler</code></a> to create a new project and deploy it to Workers.</p><p>When building a ChatGPT plugin, there are three things you need to consider:</p><ol><li><p>The plugin's metadata, which includes the plugin's name, description, and other info</p></li><li><p>The plugin's schema, which defines the plugin's input and output</p></li><li><p>The plugin's behavior, which defines how the plugin responds to user input</p></li></ol><p>To handle all of these parts in a simple, easy-to-understand API, we've created the <a href="https://www.npmjs.com/package/@cloudflare/itty-router-openapi"><code>@cloudflare/itty-router-openapi</code> package</a>, which makes it easy to manage your plugin's metadata, schema, and behavior. This package is included in the ChatGPT Plugin Quickstart, so you can get started right away.</p><p>To show how the package works, we'll look at two key files in the ChatGPT Plugin Quickstart: <code>index.js</code> and <code>search.js</code>. The <code>index.js</code> file contains the plugin's metadata and schema, while the <code>search.js</code> file contains the plugin's behavior. Let's take a look at each of these files in more detail.</p><p>In <code>index.js</code>, we define the plugin's metadata and schema. The metadata includes the plugin's name, description, and version, while the schema defines the plugin's input and output.</p><p>The configuration matches the definition required by <a href="https://platform.openai.com/docs/plugins/getting-started/plugin-manifest">OpenAI's plugin manifest</a>, and helps ChatGPT understand what your plugin is, and what purpose it serves.</p><p>Here's what the <code>index.js</code> file looks like:</p>
            <pre><code>import { OpenAPIRouter } from "@cloudflare/itty-router-openapi";
import { GetSearch } from "./search";

export const router = OpenAPIRouter({
  schema: {
    info: {
      title: 'GitHub Repositories Search API',
      description: 'A plugin that allows the user to search for GitHub repositories using ChatGPT',
      version: 'v0.0.1',
    },
  },
  docs_url: '/',
  aiPlugin: {
    name_for_human: 'GitHub Repositories Search',
    name_for_model: 'github_repositories_search',
    description_for_human: "GitHub Repositories Search plugin for ChatGPT.",
    description_for_model: "GitHub Repositories Search plugin for ChatGPT. You can search for GitHub repositories using this plugin.",
    contact_email: 'support@example.com',
    legal_info_url: 'http://www.example.com/legal',
    logo_url: 'https://workers.cloudflare.com/resources/logo/logo.svg',
  },
})

router.get('/search', GetSearch)

// 404 for everything else
router.all('*', () =&gt; new Response('Not Found.', { status: 404 }))

export default {
  fetch: router.handle
}</code></pre>
            <p>In the <code>search.js</code> file, we define the plugin's behavior. This is where we specify how the plugin responds to user input. It also defines the plugin's schema, which ChatGPT uses to validate the plugin's input and output.</p><p>Importantly, this doesn't just define the <i>implementation</i> of the code. It also automatically generates an OpenAPI schema that helps ChatGPT understand how your code works - for instance, that it takes a parameter "q", that it is of "String" type, and that it can be described as "The query to search for". With the schema defined, the <code>handle</code> function makes any relevant parameters available as function arguments, to implement the logic of the endpoint as you see fit.</p><p>Here's what the <code>search.js</code> file looks like:</p>
            <pre><code>import { OpenAPIRoute, Query } from "@cloudflare/itty-router-openapi";

export class GetSearch extends OpenAPIRoute {
  static schema = {
    tags: ['Search'],
    summary: 'Search repositories by a query parameter',
    parameters: {
      q: Query(String, {
        description: 'The query to search for',
        default: 'cloudflare workers'
      }),
    },
    responses: {
      '200': {
        schema: {
          repos: [
            {
              name: 'itty-router-openapi',
              description: 'OpenAPI 3 schema generator and validator for Cloudflare Workers',
              stars: '80',
              url: 'https://github.com/cloudflare/itty-router-openapi',
            }
          ]
        },
      },
    },
  }

  async handle(request: Request, env, ctx, data: Record&lt;string, any&gt;) {
    const url = `https://api.github.com/search/repositories?q=${encodeURIComponent(data.q)}`

    const resp = await fetch(url, {
      headers: {
        'Accept': 'application/vnd.github.v3+json',
        'User-Agent': 'RepoAI - Cloudflare Workers ChatGPT Plugin Example'
      }
    })

    if (!resp.ok) {
      return new Response(await resp.text(), { status: 400 })
    }

    const json = await resp.json()

    // @ts-ignore
    const repos = json.items.map((item: any) =&gt; ({
      name: item.name,
      description: item.description,
      stars: item.stargazers_count,
      url: item.html_url
    }))

    return {
      repos: repos
    }
  }
}</code></pre>
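<p>One detail worth making explicit: the incoming query should be URL-encoded before it is interpolated into the GitHub API URL, since search terms often contain spaces or qualifier characters. A small helper (hypothetical, not part of the Quickstart) shows the safe construction:</p>

```javascript
// Hypothetical helper: build the GitHub search URL with proper encoding.
// Without encodeURIComponent, a query like "cloudflare workers" would
// produce a URL containing a raw space.
function buildSearchUrl(q) {
  return `https://api.github.com/search/repositories?q=${encodeURIComponent(q)}`;
}

console.log(buildSearchUrl("cloudflare workers"));
// https://api.github.com/search/repositories?q=cloudflare%20workers
```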
            <p>The quickstart smooths out the entire development process, so you can focus on crafting custom behaviors, endpoints, and features for your ChatGPT plugins without getting caught up in the nitty-gritty. If you aren't familiar with API schemas, this also means that you can rely on our schema and manifest generation tools to handle the complicated bits, and focus on the implementation to build your plugin.</p><p>Besides making development a breeze, it's worth noting that you're also deploying to Workers, which takes advantage of Cloudflare's vast global network. This means your ChatGPT plugins enjoy low-latency access and top-notch performance, no matter where your users are located. By combining the strengths of Cloudflare Workers with the versatility of ChatGPT plugins, you can create conversational AI tools that are not only powerful and scalable but also cost-effective and globally accessible.</p>
    <div>
      <h2>Example</h2>
      <a href="#example">
        
      </a>
    </div>
    <p>To demonstrate the capabilities of our quickstarts, we've created two example ChatGPT plugins. The first, which we reviewed above, connects ChatGPT with the GitHub Repositories Search API. This plugin enables users to search for repositories by simply entering a search term, returning useful information such as the repository's name, description, star count, and URL.</p><p>One intriguing aspect of this example is how the plugin can go beyond basic querying. For instance, when asked "What are the most popular JavaScript projects?", ChatGPT was able to intuitively understand the user's intent and craft a new query that filters both by the number of stars (a measure of popularity) and by programming language (JavaScript), without requiring any explicit prompting. This showcases the power and adaptability of ChatGPT plugins when integrated with external APIs, providing more insightful and context-aware responses.</p>
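<p>Concretely, GitHub's search syntax lets a single <code>q</code> parameter combine qualifiers such as <code>stars:&gt;N</code> and <code>language:X</code> - the kind of query ChatGPT composed on its own here. A hypothetical helper shows what such a query looks like:</p>

```javascript
// Hypothetical sketch of the kind of search query ChatGPT composed:
// GitHub's search syntax combines qualifiers like stars:>N (popularity)
// and language:X in a single q parameter.
function popularReposQuery(language, minStars) {
  return `stars:>${minStars} language:${language}`;
}

const q = popularReposQuery("javascript", 10000);
console.log(q); // "stars:>10000 language:javascript"

// The plugin would then request something like:
console.log(`https://api.github.com/search/repositories?q=${encodeURIComponent(q)}&sort=stars`);
```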
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/zNnhkg7XeayfWqrmncoGV/d73078e55a3274613810871371498fba/1847-2.png" />
            
            </figure><p>The second plugin uses the <a href="https://pirate-weather.apiable.io/">Pirate Weather API</a> to retrieve up-to-date weather information. Remarkably, OpenAI is able to translate the request for a specific location (for instance, “Seattle, Washington”) into longitude and latitude values – which the Pirate Weather API uses for lookups – and make the correct API request, without the user needing to do any additional work.</p>
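<p>Pirate Weather uses a Dark Sky-style URL scheme in which latitude and longitude are path segments, which is why that coordinate translation matters. Here is a hedged sketch of the request URL (the API key is a placeholder; the coordinates are Seattle's):</p>

```javascript
// Sketch of the Dark Sky-style request URL Pirate Weather expects.
// The API key is a placeholder; 47.6062, -122.3321 is Seattle.
function pirateWeatherUrl(apiKey, latitude, longitude) {
  return `https://api.pirateweather.net/forecast/${apiKey}/${latitude},${longitude}`;
}

console.log(pirateWeatherUrl("YOUR_API_KEY", 47.6062, -122.3321));
// https://api.pirateweather.net/forecast/YOUR_API_KEY/47.6062,-122.3321
```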
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/12clMhHEuroetJ1P3wqTgP/ceb2fc9984a193ff1dde09a671246864/1847-4.png" />
            
            </figure><p>With our ChatGPT Plugin Quickstarts, you can create custom plugins that connect to any API, database, or other data source, giving you the power to create ChatGPT plugins that are as unique and versatile as your imagination. The possibilities are endless, opening up a whole new world of conversational AI experiences tailored to specific domains and use cases.</p>
    <div>
      <h2>Get started today</h2>
      <a href="#get-started-today">
        
      </a>
    </div>
    <p><a href="https://github.com/cloudflare/chatgpt-plugin/tree/main/example-plugin">The ChatGPT Plugin Quickstarts</a> don’t just make development a snap - they also offer seamless deployment and scaling thanks to Cloudflare Workers. With the generous free plan provided by Workers, you can deploy your plugin quickly and scale it as needed.</p><p>Our ChatGPT Plugin Quickstarts are all about sparking creativity, speeding up development, and empowering developers to create amazing conversational AI experiences. By leveraging Cloudflare Workers' robust infrastructure and our streamlined tooling, you can easily build, deploy, and scale custom ChatGPT plugins, unlocking a world of endless possibilities for conversational AI applications.</p><p>Whether you're crafting a virtual assistant, a customer support bot, a language translator, or any other conversational AI tool, our ChatGPT Plugin Quickstarts are a great place to start. We're excited to provide this Quickstart, and would love to see what you build with it. <a href="https://discord.com/invite/cloudflaredev">Join us</a> in our Discord community to share what you're working on!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[ChatGPT]]></category>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">4iStRh6jdgrcRyOkbG9E74</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[How we built an open-source SEO tool using Workers, D1, and Queues]]></title>
            <link>https://blog.cloudflare.com/how-we-built-an-open-source-seo-tool-using-workers-d1-and-queues/</link>
            <pubDate>Thu, 02 Mar 2023 15:03:54 GMT</pubDate>
            <description><![CDATA[ In this blog post, I’m excited to show off some of the new tools in Cloudflare’s developer arsenal, D1 and Queues, to prototype and ship an internal tool for our SEO experts at Cloudflare. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Building applications on Cloudflare Workers has always been fun. Workers applications have low latency response times by default, and easy developer ergonomics thanks to Wrangler. It's no surprise that for years now, developers have been going from idea to production with Workers in just a few minutes.</p><p>Internally, we're no different. When a member of our team has a project idea, we often reach for Workers first, and not just for the MVP stage, but in production, too. Workers have been a secret ingredient to Cloudflare’s innovation for some time now, allowing us to build products like Access, Stream and Workers KV. Even better, when we have new ideas <i>and</i> we can use new Cloudflare products to build them, it's a great way to give feedback on those products.</p><p>We've discussed this in the past on the Cloudflare blog - in May last year, <a href="/new-dev-docs/">I wrote how we rebuilt Cloudflare's developer documentation</a> using many of the tools that had recently been released in the Workers ecosystem: Cloudflare Pages for hosting, and Bulk Redirects for the redirect rules. In November, <a href="/building-a-better-developer-experience-through-api-documentation/">we released a new version of our API documentation</a>, which again used Pages for hosting, and Pages functions for intelligent caching and transformation of our API schema.</p><p>In this blog post, I’m excited to show off some of the new tools in Cloudflare’s developer arsenal, <a href="https://www.cloudflare.com/developer-platform/products/d1/">D1</a> and <a href="/introducing-cloudflare-queues/">Queues</a>, to prototype and ship an internal tool for our SEO experts at Cloudflare. We've made this project, which we're calling Prospector, open-source too - check it out in our <a href="https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-prospector"><code>cloudflare/templates</code></a> repo on GitHub. 
Whether you're a developer looking to understand how to use multiple parts of Cloudflare's developer stack together, or an SEO specialist who may want to deploy the tool in production, we've made it incredibly easy to get up and running.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6AbvyVpEkBfnOITizlfkhT/73156db0a0fe274ead622a677b6eb959/image1.png" />
            
            </figure>
    <div>
      <h2>What we're building</h2>
      <a href="#what-were-building">
        
      </a>
    </div>
    <p>Prospector is a tool that allows Cloudflare's SEO experts to monitor our blog and marketing site for specific keywords. When a keyword is matched on a page, Prospector will notify an email address. This allows our SEO experts to stay informed of any changes to our website, and take action accordingly.</p><p><a href="/sending-email-from-workers-with-mailchannels/">Using MailChannels' integration with Workers</a>, we can quickly and easily send emails from our application using a single API call. This allows us to focus on the core functionality of the application, and not worry about the details of sending emails.</p><p>Prospector uses Cloudflare Workers as the user-facing API for the application. It uses D1 to store and retrieve data in real-time, and Queues to handle the fetching of all URLs and the notification process. We've also included an intuitive user interface for the application, which is built with HTML, CSS, and JavaScript.</p>
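<p>That single API call boils down to POSTing a JSON payload to MailChannels. Below is a sketch of building that payload - the shape mirrors the integration described in the linked post, but the addresses are placeholders, and you should confirm the current request shape against MailChannels' own documentation:</p>

```javascript
// Build the JSON body for MailChannels' send API - roughly the shape
// the linked Workers integration uses. Addresses are placeholders.
function buildMatchEmail(to, keyword, url) {
  return {
    personalizations: [{ to: [{ email: to }] }],
    from: { email: "prospector@example.com" },
    subject: `New match for "${keyword}"`,
    content: [
      { type: "text/plain", value: `The keyword "${keyword}" was found on ${url}` },
    ],
  };
}

const body = buildMatchEmail("seo@example.com", "Workers AI", "https://blog.cloudflare.com/example/");

// In the Worker, sending is then a single fetch call:
// await fetch("https://api.mailchannels.net/tx/v1/send", {
//   method: "POST",
//   headers: { "content-type": "application/json" },
//   body: JSON.stringify(body),
// });
console.log(body.subject);
```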
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/339I5LA7CAciIdEOyfKkl3/fc05a5f4b5d41ef794d638df5e41d1fb/image3-1.png" />
            
            </figure>
    <div>
      <h2>Why we built it</h2>
      <a href="#why-we-built-it">
        
      </a>
    </div>
    <p>It is widely known in SEO that both internal and external links help Google and other search engines understand what a website is about, which impacts keyword rankings. Not only do these links guide readers to additional helpful information, they also allow <a href="https://www.cloudflare.com/learning/bots/what-is-a-web-crawler/">web crawlers</a> for search engines to discover and index content on the site.</p><p>Acquiring external links is often a time-consuming process and at the discretion of third parties, whereas website owners typically have much more control over internal links. As a result, internal linking is one of the most useful levers available in SEO.</p><p>In an ideal world, every piece of content would be fully formed upon publication, replete with helpful internal links throughout the piece. However, this is often not the case. Many times, content is edited after the fact or additional pieces of relevant content come along after initial publication. These situations result in missed opportunities for internal linking.</p><p>Like other large organizations, Cloudflare has published thousands of blogs and web pages over the years. We share new content every time a product/technology is introduced and improved. Ultimately, that also means it's become more challenging to identify opportunities for internal linking in a timely, automated fashion. We needed a tool that would allow us to identify internal linking opportunities as they appear, and speed up the time it takes to identify new internal linking opportunities.</p><p>Although we tested several tools that might solve this problem, we found that they were limited in several ways. First, some tools only scanned the first 2,000 characters of a web page. Any opportunities found beyond that limit would not be detected. Next, some tools did not allow us to limit searches to certain areas of the site and resulted in many false positives. 
Finally, other potential solutions required manual operation, leaving the process at the mercy of human memory.</p><p>To solve our problem (and ultimately, improve our SEO), we needed an automated tool that could discover and notify us of new instances of targeted phrases on a specified range of pages.</p>
    <div>
      <h2>How it works</h2>
      <a href="#how-it-works">
        
      </a>
    </div>
    
    <div>
      <h3>Data model</h3>
      <a href="#data-model">
        
      </a>
    </div>
    <p>First, let's explore the data model for Prospector. We have two main tables: <code>notifiers</code> and <code>urls</code>. The <code>notifiers</code> table stores the email address and keyword that we want to monitor. The <code>urls</code> table stores the URL and sitemap that we want to scrape. The <code>notifiers</code> table has a one-to-many relationship with the <code>urls</code> table, meaning that each notifier can have many URLs associated with it.</p><p>In addition, we have a <code>sitemaps</code> table that stores the sitemap URLs that we've scraped. Many larger websites don't just have a single sitemap: the Cloudflare blog, for instance, has a primary sitemap that contains four sub-sitemaps. When the application is deployed, a primary sitemap is provided as configuration, and Prospector will parse it to find all of the sub-sitemaps.</p><p>Finally, <code>notifier_matches</code> is a table that stores the matches between a notifier and a URL. This allows us to keep track of which URLs have already been matched, and which ones still need to be processed. When a match has been found, the <code>notifier_matches</code> table is updated to reflect that, and "matches" for a keyword are no longer processed. This saves our SEO experts from a crowded inbox, and allows them to focus and act on new matches.</p>
    <div>
      <h3>Connecting the pieces with Cloudflare Queues</h3>
      <a href="#connecting-the-pieces-with-cloudflare-queues">
        
      </a>
    </div>
    <p>Cloudflare Queues acts as the work queue for Prospector. When a new notifier is added, a new job is created for it and added to the queue. Behind the scenes, Queues will distribute the work across multiple Workers, allowing us to scale the application as needed. When a job is processed, Prospector will scrape the URL and check for matches. If a match is found, Prospector will send an email to the notifier's email address.</p><p>Using the Cron Triggers functionality in Workers, we can schedule the scraping process to run at a regular interval - by default, once a day. 
This allows us to keep our data up-to-date, and ensures that we're always notified of any changes to our website. It also allows the end-user to configure when they receive emails in case they want to receive them more or less frequently, or at the beginning of their workday.</p><p>The Module Workers syntax for Workers makes accessing the application bindings - the constants available in the application for querying D1, Queues, and other services - incredibly easy. <code>src/index.ts</code>, the entrypoint for the application, looks like this:</p>
            <pre><code>import { DBUrl, Env } from './types'

import {
  handleQueuedUrl,
  scheduled,
} from './functions';

import h from './api'

export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext
  ): Promise&lt;Response&gt; {
    return h.fetch(request, env, ctx)
  },

  async queue(
    batch: MessageBatch&lt;string&gt;,
    env: Env
  ): Promise&lt;void&gt; {
    for (const message of batch.messages) {
      const url: DBUrl = JSON.parse(message.body)
      await handleQueuedUrl(url, env.DB)
    }
  },

  async scheduled(
    controller: ScheduledController,
    env: Env,
    ctx: ExecutionContext
  ): Promise&lt;void&gt; {
    await scheduled({
      authToken: env.AUTH_TOKEN,
      db: env.DB,
      queue: env.QUEUE,
      sitemapUrl: env.SITEMAP_URL,
    })
  }
};</code></pre>
            <p>With this syntax, we can see where the various events incoming to the application - the <code>fetch</code> event, the <code>queue</code> event, and the <code>scheduled</code> event - are handled. The <code>fetch</code> event is the main entrypoint for the application, and is where we handle all of the API routes. The <code>queue</code> event is where we handle the work that's been added to the queue, and the <code>scheduled</code> event is where we handle the scheduled scraping process.</p><p>Central to the application, of course, is Workers - acting as the API gateway and coordinator. We've elected to use the popular open-source framework <a href="https://honojs.dev/">Hono</a>, an Express-style API for Workers, in Prospector. With Hono, we can quickly map out a REST API in just a few lines of code. Here's an example of a few API routes and how they're defined with Hono:</p>
            <pre><code>const app = new Hono()

app.get("/", (context) =&gt; {
  return context.html(index)
})

app.post("/notifiers", async (context) =&gt; {
  try {
    const { keyword, email } = await context.req.parseBody()
    await context.env.DB.prepare(
      "insert into notifiers (keyword, email) values (?, ?)"
    ).bind(keyword, email).run()
    return context.redirect('/')
  } catch (err) {
    context.status(500)
    return context.text("Something went wrong")
  }
})

app.get('/sitemaps', async (context) =&gt; {
  const query = await context.env.DB.prepare(
    "select * from sitemaps"
  ).all();
  const sitemaps: Array&lt;DBSitemap&gt; = query.results
  return context.json(sitemaps)
})</code></pre>
            <p>Crucial to the development of Prospector are the improved TypeScript bindings for Workers. <a href="/improving-workers-types/">As announced in November of last year</a>, TypeScript bindings for Workers are now automatically generated based on <a href="/workerd-open-source-workers-runtime/">our open source runtime, <code>workerd</code></a>. This means that whenever we use the types provided from the <a href="https://github.com/cloudflare/workers-types"><code>@cloudflare/workers-types</code> package</a> in our application, we can be sure that the types are always up-to-date.</p><p>With these bindings, we can define the types for our environment variables, and use them in our application. Here's an example of the <code>Env</code> type, which defines the environment variables that we use in the application:</p>
            <pre><code>export interface Env {
  AUTH_TOKEN: string
  DB: D1Database
  QUEUE: Queue
  SITEMAP_URL: string
}</code></pre>
            <p>Notice the types of the <code>DB</code> and <code>QUEUE</code> bindings - <code>D1Database</code> and <code>Queue</code>, respectively. These types are automatically generated, complete with type signatures for each method inside of the D1 and Queue APIs. This means that we can be sure that we're using the correct methods, and that we're passing the correct arguments to them, directly from our text editor - without having to refer to the documentation.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6uqW1v9MEpdsEMieihxKFk/e865da1397167301f051616d61f83a1a/image4.png" />
            
            </figure>
    <div>
      <h2>How to use it</h2>
      <a href="#how-to-use-it">
        
      </a>
    </div>
    <p>One of my favorite things about Workers is that deploying applications is quick and easy. Using <code>wrangler.toml</code> and some simple build scripts, we can deploy a fully-functional application in just a few minutes. Prospector is no different. With just a few commands, we can create the necessary D1 database and Queues instance, and deploy the application to our account.</p><p>First, you'll need to clone the repository from our cloudflare/templates repository:</p><p><code>git clone $URL</code></p><p>If you haven't installed wrangler yet, you can do so by running:</p><p><code>npm install -g wrangler</code></p><p>With Wrangler installed, you can log in to your account by running:</p><p><code>wrangler login</code></p><p>After you've done that, you'll need to create a new D1 database, as well as a Queues instance. You can do this by running the following commands:</p><pre><code>wrangler d1 create $DATABASE_NAME
wrangler queues create $QUEUE_NAME</code></pre><p>Configure your <code>wrangler.toml</code> with the appropriate bindings (see the README for an example):</p>
            <pre><code>[[ d1_databases ]]
binding = "DB"
database_name = "keyword-tracker-db"
database_id = "ab4828aa-723b-4a77-a3f2-a2e6a21c4f87"
preview_database_id = "8a77a074-8631-48ca-ba41-a00d0206de32"
	
[[queues.producers]]
  queue = "queue"
  binding = "QUEUE"

[[queues.consumers]]
  queue = "queue"
  max_batch_size = 10
  max_batch_timeout = 30
  max_retries = 10
  dead_letter_queue = "queue-dlq"</code></pre>
            <p>Next, you can run the <code>bin/migrate</code> script to create the tables in your database:</p><p><code>bin/migrate</code></p><p>This will create all the needed tables in your database, both in development (locally) and in production. Note that you'll even see the creation of an honest-to-goodness <code>.sqlite3</code> file in your project directory - this is the local development database, which you can connect to directly using the same SQLite CLI that you're used to:</p><pre><code>$ sqlite3 .wrangler/state/d1/DB.sqlite3
sqlite&gt; .tables
notifier_matches  notifiers  sitemaps  urls</code></pre><p>Finally, you can deploy the application to your account:</p><p><code>npm run deploy</code></p><p>With a deployed application, you can visit your Workers URL to see the user interface. From there, you can add new notifiers and URLs, and see the results of your scraping process. When a new keyword match is found, you’ll receive an email with the details of the match instantly:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2P59U5wQRoysgE8nWLbwLD/b1e7240ddd90dd36676163b201998cb3/image2-1.png" />
            
            </figure>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>For some time, many applications were hard to build on Workers without relational data or background task tooling. Now, with D1 and Queues, we can build applications that seamlessly integrate between real-time user interfaces, geographically distributed data, background processing, and more, all using the same developer ergonomics and low latency that Workers is known for.</p><p>D1 has been crucial for building this application. On larger sites, the number of URLs that need to be scraped can be quite large. If we were to use Workers KV, our key-value store, for storing this data, we would quickly struggle with how to model, retrieve, and update the data needed for this use-case. With D1, we can build relational data models and quickly query <i>just</i> the data we need for each queued processing task.</p><p>Using these tools, developers can build internal tools and applications for their companies that are more powerful and more scalable than ever before. With the integration of Cloudflare's Zero Trust suite, developers can make these applications secure by default, and deploy them to Cloudflare's global network. This allows developers to build applications that are fast, secure, and reliable, all without having to worry about the underlying infrastructure.</p><p>Prospector is a great example of how easy it is to build applications on Cloudflare Workers. With the recent addition of D1 and Queues, we've been able to build fully-functional applications that require real-time data and background processing in just a few hours. 
We're excited to share the open-source code for Prospector, and we'd love to hear your feedback on the project.</p><p>If you have any questions, feel free to reach out to us on Twitter at <a href="https://twitter.com/cloudflaredev">@cloudflaredev</a>, or join us in the Cloudflare Workers Discord community, which recently hit 20k members and is a great place to ask questions and get help from other developers.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Queues]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3Ye7OiZdwDby0AGqA7LQAh</guid>
            <dc:creator>Kristian Freeman</dc:creator>
            <dc:creator>Neal Kindschi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Iteration isn't just for code: here are our latest API docs]]></title>
            <link>https://blog.cloudflare.com/building-a-better-developer-experience-through-api-documentation/</link>
            <pubDate>Wed, 16 Nov 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ We’re excited to share that the next iteration of Cloudflare’s API reference documentation is now available. The new docs standardize our API content and improve the overall developer experience for interacting with Cloudflare’s API. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Mes42U4gOEy3piPQwOhWs/b4a3515cbd34af478b2cff4e09d933e8/image4-18.png" />
            
            </figure><p>We’re excited to share that the next iteration of <a href="https://developers.cloudflare.com/api">Cloudflare’s API reference documentation</a> is now available. The new docs standardize our API content and improve the overall developer experience for interacting with Cloudflare’s API.</p>
    <div>
      <h3>Why does API documentation matter?</h3>
      <a href="#why-does-api-documentation-matter">
        
      </a>
    </div>
    <p>Everyone talks about how important APIs are, but not everyone acknowledges the critical role that API documentation plays in an API’s usability. Throwing docs together is easy. Getting them right is harder.</p><p>At Cloudflare, we try to meet our users where they are. For the majority of customers, that means providing clear, easy-to-use products in our dashboard. But developers don’t always want what our dashboard provides. Some developers prefer to use a CLI or Wrangler to have a higher level of control over what’s happening with their Cloudflare products. Others want more customization and deeper ties into their company’s internal applications. Some want all the above.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4maPLOXhOJxHYxOvGZd2qz/dce4fa3c0bd9f128cbd7c09e5682fa18/image5-9.png" />
            
            </figure><p>A developer’s job is to create, debug, and optimize code - whether that’s an application, interface, database, etc. - as efficiently as possible, and to ensure that the resulting code itself runs efficiently. <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a> enable that efficiency through automation. Let’s say a developer wants to run a cache purge every time content is updated on their website. They could use Cloudflare’s dashboard to enable cache purge, but they might want it to happen automatically instead. Enter Cloudflare’s API.</p><p>In the same way that the Cloudflare dashboard is an interface for humans, an API is an interface for computers. For a computer to execute on instructions, there is no room for interpretation. The instructions, formatted as requests, have to follow a specific set of rules and include certain requirements. API documentation details what those rules and requirements are.</p><p>It’s a frustrating experience for developers when you’re in the details of a complex project and can’t troubleshoot an error because the docs aren’t comprehensive or don’t load. Unfortunately, that’s been the reality for developers using Cloudflare’s API. Figuring out how to use Cloudflare’s API was a “choose your own adventure” story for the Cloudflare users who made 126 million visits monthly to our API documentation. Even if the APIs you needed were fully documented, you encountered long page load times and a site that couldn’t render on mobile.</p><p>From a technical standpoint, we were using api.cloudflare.com for both the API documentation site and the Cloudflare API access point, which is awkward. We needed a more sustainable way for internal teams to create and document their APIs according to an accepted standard.</p>
    <div>
      <h3>Building a better developer experience for our APIs</h3>
      <a href="#building-a-better-developer-experience-for-our-apis">
        
      </a>
    </div>
    <p>Like with all of our documentation projects at Cloudflare, we started by thinking about the user’s workflow - in this case, a developer’s workflow as they’re getting started with the Cloudflare API.</p><p><b>Providing docs on mobile</b> We know developers research products before diving into projects, usually exploring API documentation before writing any code. That means making docs available on mobile for working on the go. Check.</p><p><b>Improving the navigation</b> As a developer, at the point you’re looking at API docs, you generally have an idea of what you want to use the API for. Our goal with the new API docs is to make it as easy as possible for you to find the information you need. We recognize that endpoints are organized a bit haphazardly in the current API docs. To address that clunkiness, we’ve grouped endpoints by product or setting and alphabetized that list on the new site, making it easier to navigate and find what you’re looking for. And while this may seem like a small thing, you now can use a keyboard shortcut to search the page.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2zxxPTNKamMqhTM53c5S4T/bf6f15b6117853d64fc8ad2cc9a1321e/image6-6.png" />
            
            </figure><p><b>Clarifying authentication</b> Once you start writing code, you first have to handle authentication to start using the API. Authentication information is now readily available for every endpoint, and we link to the <a href="https://developers.cloudflare.com/fundamentals/api/get-started/">full developer docs about authentication</a> from the API overview page.</p><p><b>Adding examples</b> Good API docs have clear descriptions, but the best API docs share templates and examples to minimize a developer’s friction to deploying code. Every endpoint now includes an example and clearly indicates which parameters are required, removing yet another decision developers have to make when working with our API.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1YBLnCWZAaa8qZhr7KW3sh/9bf3db93a77aa24cf522e051354abb02/image2-34.png" />
            
            </figure><p><b>Gathering feedback and iterating</b> As we implemented all of these improvements, we kept some of our biggest users - our internal developers - informed along the way. We shared the test site early and often to get internal feedback, which caught bugs and helped us refine the site’s usability. The Discord and Community MVPs also volunteered to test the site, giving us valuable outside perspective on what we built. We incorporated all of that feedback to provide a vetted, deliberate UX at launch.</p>
    <div>
      <h3>How we built this</h3>
      <a href="#how-we-built-this">
        
      </a>
    </div>
    <p>Since this week is all about what you can build on our developer platform, we wanted to share the details about how this all works under the hood (hint: it’s mostly Workers).</p><p>We used a combination of open-source tools and Cloudflare products to revamp the API doc site. Previously the content on api.cloudflare.com was sourced from the JSON hyper-schema files that described our APIs, but over the years we repeatedly heard that published schemas would help you better integrate with Cloudflare. Several Cloudflare engineering teams started adopting the <a href="https://www.openapis.org/">OpenAPI specification</a>, and with a little research, planning, and testing, we pivoted from those JSON hyper-schemas to the OpenAPI framework. Check out the blog post about our <a href="/open-api-transition/">Transition to OpenAPI</a> for more details.</p><p>Because the OpenAPI specification defines how to describe APIs, our schema files now have consistency - not just among the various products, but also with an industry-accepted standard. Hundreds of documentation and code generation tools exist to pull from that standard, which means we have options that weren’t available for our homegrown JSON hyper-schemas.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7s01L8GzGtGQGhjN2lV0Xp/b8b82197cc1d9c1c2a44279508c4ec2e/image3-25.png" />
            
            </figure><p>We chose <a href="https://github.com/stoplightio/elements">Stoplight Elements</a>, an open-source React framework, for our site design because it has a clean layout and is easily customizable. While there are a number of both dynamic and static tools for parsing OpenAPI schemas and rendering documentation sites, we chose React because of its ubiquity, performance, and because it plays well with Cloudflare Pages, our deployment tool of choice. Within Stoplight you can generate code samples for a variety of  programming languages, like JavaScript, Java, and Python, as well as for tools like cURL, HTTPie, and wget.</p><p>We build and deploy the React application with <a href="https://pages.cloudflare.com/">Cloudflare Pages</a>, and using <a href="https://developers.cloudflare.com/pages/platform/functions/">Pages functions</a>, we optimize the OpenAPI schema file for Stoplight’s UI and cache it on Cloudflare’s network, reducing the latency needed to request the schema. Whenever teams add new API endpoints or definitions to the schema, we just update the schema file in GitHub. Because our API documentation loads the schema dynamically, this means we don’t have to wait for Cloudflare Pages to deploy a new version of the documentation site. The automation helps us ensure that everything we expose in the API is documented without manual interaction. Deploying with Pages gives us yet another opportunity to test our own products out on ourselves, helping us find areas for improvement that we turn around and pass along to you.</p>
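<p>The caching step described above can be sketched as a memoized loader: fetch the schema once, then serve the cached copy on subsequent requests. This is an illustrative stand-in, not the actual Pages Function - <code>loadSchema</code> and the in-memory cache are assumptions (the real deployment caches on Cloudflare's network):</p>

```typescript
// Illustrative memoization of the schema load; the production Pages Function
// caches on Cloudflare's network rather than in a local variable.
function makeSchemaCache(loadSchema: () => string) {
  let cached: string | undefined;
  let loads = 0;
  return {
    get(): string {
      if (cached === undefined) {
        cached = loadSchema(); // only hit the origin on a cache miss
        loads += 1;
      }
      return cached;
    },
    loadCount: (): number => loads,
  };
}

// Stand-in for fetching the OpenAPI schema file from GitHub
const cache = makeSchemaCache(() => '{"openapi":"3.0.0"}');
const first = cache.get();
const second = cache.get(); // served from cache; loadSchema is not called again
```

<p>The same idea is what lets schema updates ship by editing a file in GitHub: the next cache miss picks up the new schema, with no site redeploy in between.</p>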
    <div>
      <h3>Looking ahead</h3>
      <a href="#looking-ahead">
        
      </a>
    </div>
    <p>Moving our schemas to the OpenAPI specification allows us to use ready-to-go tools like Stoplight, but we’re just getting started. Soon we’ll be adding search functionality, letting you access all relevant information across developer, API, and support docs. The mobile experience will evolve, providing an even cleaner way to read API content when you’re on your phone. We’ll also add a “try it out” functionality to test out your requests right in the browser before writing your own code.</p><p>Over time, we want an even tighter integration between the API reference documentation, our how-to content in developer docs, and the Cloudflare dashboard. The standardized structure we get with the OpenAPI spec sets us up to reuse the schema files across our client libraries, Terraform, <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api-gateway/">API gateway</a>, and Trakal. The consistency across API descriptions makes it easier for us to do things like localize API content, customize out-of-the-box documentation generation tools, and continue innovating like we always do.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/qgE1KAtlYwqWlrTXoVToz/8af187efed650ce369aff7fbda1a08fb/image1-41.png" />
            
            </figure>
    <div>
      <h3>We want to hear from you!</h3>
      <a href="#we-want-to-hear-from-you">
        
      </a>
    </div>
    <p>As you’re using the new <a href="https://developers.cloudflare.com/api">API doc site</a>, send us your <a href="https://docs.google.com/forms/d/e/1FAIpQLSfu46XUqWbabzDOhkIn5XCu7Wqufng_ouC4QB3qxScPge1ttA/viewform">feedback</a> and let us know if there are any feature improvements you’d like to see in the future. We want the site to be as useful as possible for your day-to-day interactions with Cloudflare’s API. We hope the improvements to our API doc experience will help you seamlessly use our API and efficiently deploy and maintain your Cloudflare products.</p><p>Thanks for your patience and perspective as we iterate on the API docs!</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developer Documentation]]></category>
            <category><![CDATA[Technical Writing]]></category>
            <guid isPermaLink="false">2nISxLF3OKcbkhUcgOS2Kq</guid>
            <dc:creator>Claire Waters</dc:creator>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Making static sites dynamic with Cloudflare D1]]></title>
            <link>https://blog.cloudflare.com/making-static-sites-dynamic-with-cloudflare-d1/</link>
            <pubDate>Wed, 16 Nov 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ In this blog post, I'll show you how to use D1 to add comments to a static blog site. To do this, we'll construct a new D1 database and build a simple JSON API that allows the creation and retrieval of comments. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EtdDhbUmqO6pJ0onYlD80/6c88d9a3b9189dac7ead2fb45c2450f0/image1-40.png" />
            
            </figure>
    <div>
      <h3>Introduction</h3>
      <a href="#introduction">
        
      </a>
    </div>
    <p>There are many ways to store data in your applications. For example, in Cloudflare Workers applications, we have Workers KV for key-value storage and Durable Objects for real-time, coordinated storage without compromising on consistency. Outside the Cloudflare ecosystem, you can also plug in other tools like NoSQL and graph databases.</p><p>But sometimes, you want SQL. Indexes allow us to retrieve data quickly. Joins enable us to describe complex relationships between different tables. SQL declaratively describes how our application's data is validated, created, and performantly queried.</p><p><a href="/d1-open-alpha">D1 was released today in open alpha</a>, and to celebrate, I want to share my experience building apps with D1: specifically, how to get started, and why I’m excited about D1 joining the long list of tools you can use to build apps on Cloudflare.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1T6IVlTeQ59oZQ2M6auyey/ea995b570a3dfb93c2c1d466d75f4524/image3-24.png" />
            
            </figure><p><a href="https://www.cloudflare.com/developer-platform/products/d1/">D1</a> is remarkable because it's an instant value-add to applications without needing new tools or stepping out of the Cloudflare ecosystem. Using wrangler, we can do local development on our Workers applications, and with the addition of D1 in wrangler, we can now develop proper stateful applications locally as well. Then, when it's time to deploy the application, wrangler allows us to both access and execute commands against our D1 database, as well as deploy the API itself.</p>
    <div>
      <h3>What we’re building</h3>
      <a href="#what-were-building">
        
      </a>
    </div>
    <p>In this blog post, I'll show you how to use D1 to add comments to a static blog site. To do this, we'll construct a new D1 database and build a simple JSON API that allows the creation and retrieval of comments.</p><p>As I mentioned, separating D1 from the app itself - an API and database that remains separate from the static site - allows us to abstract the static and dynamic pieces of our website from each other. It also makes it easier to deploy our application: we will deploy the frontend to Cloudflare Pages, and the D1-powered API to Cloudflare Workers.</p>
    <div>
      <h3>Building a new application</h3>
      <a href="#building-a-new-application">
        
      </a>
    </div>
    <p>First, we'll add a basic <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">API</a> in Workers. Create a new directory and initialize a new wrangler project inside it:</p>
            <pre><code>$ mkdir d1-example &amp;&amp; cd d1-example
$ wrangler init</code></pre>
            <p>In this example, we’ll use Hono, an Express.js-style framework, to rapidly build our API. To use Hono in this project, install it using NPM:</p>
            <pre><code>$ npm install hono</code></pre>
            <p>Then, in <code>src/index.ts</code>, we’ll initialize a new Hono app, and define two endpoints: GET /api/posts/:slug/comments and POST /api/posts/:slug/comments.</p>
            <pre><code>import { Hono } from 'hono'
import { cors } from 'hono/cors'

const app = new Hono()

// The static site is served from a different origin, so allow
// cross-origin requests to the API
app.use('/api/*', cors())

app.get('/api/posts/:slug/comments', async c =&gt; {
  // do something
})

app.post('/api/posts/:slug/comments', async c =&gt; {
  // do something
})

export default app</code></pre>
            <p>Now we'll create a D1 database. In Wrangler 2, there is support for the <code>wrangler d1</code> subcommand, which allows you to create and query your D1 databases directly from the command line. So, for example, we can create a new database with a single command:</p>
            <pre><code>$ wrangler d1 create d1-example</code></pre>
            <p>With our created database, we can take the database name and ID and associate them with a <b>binding</b> inside of wrangler.toml, wrangler's configuration file. Bindings allow us to access Cloudflare resources, like D1 databases, KV namespaces, and R2 buckets, using a simple variable name in our code. Below, we’ll create the binding <code>DB</code> and use it to represent our new database:</p>
            <pre><code>[[ d1_databases ]]
binding = "DB" # i.e. available in your Worker on env.DB
database_name = "d1-example"
database_id = "4e1c28a9-90e4-41da-8b4b-6cf36e5abb29"</code></pre>
            <p>Note that this directive, the <code>[[d1_databases]]</code> field, currently requires a beta version of wrangler. You can install this for your project using the command <code>npm install -D wrangler@beta</code>.</p><p>With the database configured in our wrangler.toml, we can start interacting with it from the command line and inside our Workers function.</p><p>First, you can issue direct SQL commands using <code>wrangler d1 execute</code>:</p>
            <pre><code>$ wrangler d1 execute d1-example --command "SELECT name FROM sqlite_schema WHERE type ='table'"
Executing on d1-example:
┌─────────────────┐
│ name            │
├─────────────────┤
│ sqlite_sequence │
└─────────────────┘</code></pre>
            <p>You can also pass a SQL file - perfect for initial data seeding in a single command. Create <code>src/schema.sql</code>, which will create a new <code>comments</code> table for our project:</p>
            <pre><code>drop table if exists comments;
create table comments (
  id integer primary key autoincrement,
  author text not null,
  body text not null,
  post_slug text not null
);
create index idx_comments_post_id on comments (post_slug);

-- Optionally, uncomment the below query to create data

-- insert into comments (author, body, post_slug)
-- values ("Kristian", "Great post!", "hello-world");</code></pre>
            <p>With the file created, execute the schema file against the D1 database by passing it with the flag <code>--file</code>:</p>
            <pre><code>$ wrangler d1 execute d1-example --file src/schema.sql</code></pre>
            <p>We've created a SQL database with just a few commands and seeded it with initial data. Now we can add a route to our Workers function to retrieve data from that database. Based on our wrangler.toml config, the D1 database is now accessible via the <code>DB</code> binding. In our code, we can use the binding to prepare SQL statements and execute them, for instance, to retrieve comments:</p>
            <pre><code>app.get('/api/posts/:slug/comments', async c =&gt; {
  const { slug } = c.req.param()
  const { results } = await c.env.DB.prepare(`
    select * from comments where post_slug = ?
  `).bind(slug).all()
  return c.json(results)
})</code></pre>
            <p>In this function, we accept a <code>slug</code> URL parameter and set up a new SQL statement where we select all comments whose <code>post_slug</code> value matches that parameter. We can then return the results as a simple JSON response.</p><p>So far, we've built read-only access to our data. But inserting values into SQL is, of course, possible as well. So let's define another function that allows POST-ing to an endpoint to create a new comment:</p>
            <pre><code>// Expected shape of the JSON body for a new comment
interface Comment {
  author: string
  body: string
}

app.post('/api/posts/:slug/comments', async c =&gt; {
  const { slug } = c.req.param()
  const { author, body } = await c.req.json&lt;Comment&gt;()

  if (!author) {
    c.status(400)
    return c.text("Missing author value for new comment")
  }
  if (!body) {
    c.status(400)
    return c.text("Missing body value for new comment")
  }

  const { success } = await c.env.DB.prepare(`
    insert into comments (author, body, post_slug) values (?, ?, ?)
  `).bind(author, body, slug).run()

  if (success) {
    c.status(201)
    return c.text("Created")
  } else {
    c.status(500)
    return c.text("Something went wrong")
  }
})</code></pre>
            <p>In this example, we built a comments API for powering a blog. To see the source for this D1-powered comments API, you can visit <a href="https://github.com/cloudflare/templates/tree/main/worker-d1-api">cloudflare/templates/worker-d1-api</a>.</p>
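<p>On the static site itself, the page only needs to build the endpoint URL and render the JSON it gets back. A minimal sketch of that client-side half - the <code>apiBase</code> value and the plain-text rendering are assumptions, not part of the template above:</p>

```typescript
// Shape of a row returned by GET /api/posts/:slug/comments
interface CommentRow {
  author: string;
  body: string;
  post_slug: string;
}

// Build the endpoint URL the static page will fetch
function commentsUrl(apiBase: string, slug: string): string {
  return `${apiBase}/api/posts/${encodeURIComponent(slug)}/comments`;
}

// Render fetched comments as plain text lines; a real page would build DOM nodes
function renderComments(comments: CommentRow[]): string {
  return comments.map((c) => `${c.author}: ${c.body}`).join("\n");
}

const url = commentsUrl("https://comments-api.example.workers.dev", "hello-world");
const rendered = renderComments([
  { author: "Kristian", body: "Great post!", post_slug: "hello-world" },
]);
```

<p>On the page, a <code>fetch(url)</code> followed by <code>res.json()</code> would feed <code>renderComments</code>, and submitting a comment form would POST to the same URL.</p>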
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Bbc7exdfzvFnV47Btu7Gn/362b947983416c62e0b9670417e1babb/image2-31.png" />
            
            </figure>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>One of the most exciting things about D1 is the opportunity to augment existing applications or websites with dynamic, relational data. As a former Ruby on Rails developer, one of the things I miss most about that framework in the world of JavaScript and serverless development tools is the ability to rapidly spin up full data-driven applications without needing to be an expert in managing database infrastructure. With D1 and its easy onramp to SQL-based data, we can build true data-driven applications without compromising on performance or developer experience.</p><p>This shift corresponds nicely with the advent of static sites in the last few years, using tools like Hugo or Gatsby. A blog built with a static site generator like Hugo is incredibly performant - it will build in seconds with small asset sizes.</p><p>But by trading a tool like WordPress for a static site generator, you lose the opportunity to add dynamic information to your site. Many developers have patched over this problem by adding more complexity to their build processes: fetching and retrieving data and generating pages using that data as part of the build.</p><p>This addition of complexity in the build process attempts to fix the lack of dynamism in applications, but it still isn't genuinely dynamic. Instead of being able to retrieve and display new data as it's created, the application rebuilds and redeploys whenever data changes so that it appears to be a live, dynamic representation of data. With D1, your application can remain static, and the dynamic data will live geographically close to the users of your site, accessible via a queryable and expressive API.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Storage]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">42d4M0F5dhHm6ImRHKTL7Z</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[We rebuilt Cloudflare's developer documentation - here's what we learned]]></title>
            <link>https://blog.cloudflare.com/new-dev-docs/</link>
            <pubDate>Fri, 27 May 2022 12:55:54 GMT</pubDate>
            <description><![CDATA[ In this blog post, we’ll cover the history of Cloudflare’s developer docs, why we made this recent transition, and why we continue to dogfood Cloudflare’s products as we develop applications internally ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We recently updated <code>developers.cloudflare.com</code>, the Cloudflare Developers documentation website, to a new version of our custom documentation engine. This change consisted of a significant migration from Gatsby to Hugo and converged a collection of Workers Sites into a single Cloudflare Pages instance. Together, these updates brought developer experience, performance, and quality of life improvements for our engineers, technical writers, and product managers.</p><p>In this blog post, we’ll cover the history of Cloudflare’s developer docs, why we made this recent transition, and why we continue to <a href="https://en.wikipedia.org/wiki/Eating_your_own_dog_food">dogfood</a> Cloudflare’s products as we develop applications internally.</p>
    <div>
      <h3>What are Cloudflare’s Developer Docs?</h3>
      <a href="#what-are-cloudflares-developer-docs">
        
      </a>
    </div>
    <p>Cloudflare’s Developer Docs, which are <a href="https://github.com/cloudflare/cloudflare-docs/">open source on GitHub</a>, comprise documentation for all of Cloudflare’s products. The documentation is written by technical writers, product managers, and engineers at Cloudflare. Like many open source projects, contributions to the docs happen via Pull Requests (PRs). At time of writing, we have 1,600 documentation pages and have accepted almost 4,000 PRs, both from Cloudflare employees and external contributors in our community.</p><p>The underlying documentation engine we’ve used to build these docs has changed multiple times over the years. Documentation sites are often built with static site generators and, at Cloudflare, we’ve used tools like Hugo and Gatsby to <a href="https://blog.cloudflare.com/markdown-for-agents/">convert thousands of Markdown pages</a> into HTML, CSS, and JavaScript.</p><p>When we released the first version of our Docs Engine in mid-2020, we were excited about the facelift to our Developer Documentation site and the inclusion of dev-friendly features like dark mode and proper code syntax highlighting.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5UuLEAsx9i6dC7g3ai0LA9/b564d7bf89af2483abcf0f521f0a353f/image1-60.png" />
            
            </figure><p>Most importantly, we also used this engine to transition <i>all</i> of Cloudflare’s products with documentation onto a single engine. This allowed all Cloudflare product documentation to be developed, built, and deployed using the same core foundation. But over the next eighteen months and thousands of PRs, we realized that many of the architecture decisions we had made were not scaling.</p><p>While the user interface that we had made for navigating the documentation continued to receive great feedback from our users and product teams, decisions like using client-side rendering for docs had performance implications, especially on resource-constrained devices.</p><p>At the time, our decision to dogfood Workers Sites — which served as a precursor to Cloudflare Pages — meant that we could rapidly deploy our documentation across all of Cloudflare’s network in a matter of minutes. We implemented this by creating a separate Cloudflare Workers deployment for each product’s staging and production instances. Effectively, this meant that more than a hundred Workers were regularly updated, which caused significant headaches when trying to understand the causes and downstream effects of any failed deployments.</p><p>Finally, we struggled with our choice of underlying static site generator, Gatsby. We still think Gatsby is a great tool of choice for certain websites and applications, but we quickly found it to be the wrong match for our content-heavy documentation experience. Gatsby inherits many dependency chains to provide its featureset, but running the dependency-heavy toolchain locally on contributors’ machines proved to be an incredibly difficult and slow task for many of our documentation contributors.</p><p>When we did get to the point of deploying new docs changes, we began to be at the mercy of Gatsby’s long build times – in the worst case, almost an entire hour – just to compile Markdown and images into HTML. 
This negatively impacted our team’s ability to work quickly and efficiently as they improved our documentation. Ultimately, we were unable to find solutions to many of these issues, as they were core assumptions and characteristics of the underlying tools that we had chosen to build on — it was time for something new.</p><p>Built using Go, <a href="https://gohugo.io">Hugo</a> is incredibly fast at building large sites, has an active community, and is easily installable on a variety of operating systems. In our early discovery work, we found that Hugo would build our docs content in mere seconds. Since performance was a core motive for pursuing a rewrite, this was a significant factor in our decision.</p>
    <div>
      <h3>How we migrated</h3>
      <a href="#how-we-migrated">
        
      </a>
    </div>
    <p>When comparing frameworks, the most significant difference between Hugo and Gatsby – <i>from a user’s standpoint</i> – is the allowable contents of the Markdown files themselves. For example, Gatsby makes heavy use of <a href="https://mdxjs.com/">MDX</a>, allowing developers to author and import React components within their content pages. While this can be effective, MDX unfortunately is not CommonMark-compliant and, in turn, this means that its flavor of Markdown is <i>required</i> to be very flexible and permissive. This posed a problem when migrating to any other non-MDX-based solution, including Hugo, as these frameworks don’t grant the same flexibilities with Markdown syntax. Because of this, the largest technical challenge was converting the existing 1,600 markdown pages from MDX to a stricter, more standard Markdown variant that Hugo (or almost any framework) can interpret.</p><p>Not only did we have to convert 1,600 Markdown pages so that they’re rendered correctly by the new framework, we had to make these changes in a way that minimized the number of merge conflicts for when the migration itself was ready for deployment. There was a lot of work to be done as part of this migration – and work takes time! We could not stall or block the update cycles of the Developer Documentation repository, so we had to find a way to rename or modify <i>every single file</i> in the repository without gridlocking deployments for weeks.</p><p>The only way to solve this was through automation. We created <a href="https://github.com/cloudflare/cloudflare-docs/pull/3609/commits/2b16cd220f79c7cfd64d80f4a4592b73abcf0753">a migration script</a> that would apply all the necessary changes on the morning of the migration release day. 
Of course, this meant that we had to identify the necessary changes manually and then record them as JavaScript or Bash commands to make sweeping changes for the entire project.</p><p>For example, when migrating Markdown content, the migrator needs to take the file contents and parse them into an abstract syntax tree (AST) so that other functions can access, traverse, and modify a collection of standardized objects <i>representing</i> the content instead of resorting to a sequence of string manipulations… which is scary and error-prone.</p><p>Since the project started with MDX, we needed an MDX-compatible parser which, in turn, produces its own AST with its own object standardizations. From there, one can “walk” – aka traverse – through the AST and add, remove, and/or edit objects and object properties. With the updated AST and a final traversal, a “stringifier” function can convert each object representation back to its string representation, producing updated file contents that differ from the original.</p><p>Below is an example snippet that utilizes <a href="https://www.npmjs.com/package/mdast-util-from-markdown"><code>mdast-util-from-markdown</code></a> and <a href="https://www.npmjs.com/package/mdast-util-to-markdown"><code>mdast-util-to-markdown</code></a> to create and stringify, respectively, the MDX AST and <a href="https://github.com/lukeed/astray"><code>astray</code></a> to traverse the AST with our custom modifications. For this example, we’re looking for <code>heading</code> and <code>link</code> nodes – both names are provided by the <code>mdast-*</code> utilities – so that we can read the primary header (<code>&lt;h1&gt;</code>) text and ensure that all internal Developer Documentation links are consistent:</p>
            <pre><code>import * as fs from 'fs';
import * as astray from 'astray';
import { toMarkdown } from 'mdast-util-to-markdown';
import { fromMarkdown } from 'mdast-util-from-markdown';

/**
 * @param {string} file The "*.md" file path.
 */
export async function modify(file) {
  let content = await fs.promises.readFile(file, 'utf8');
  let AST = fromMarkdown(content);
  let title = '';

  astray.walk(AST, {
    /**
     * Read the page's &lt;h1&gt; to determine page's title.
     */
    heading(node) {
      // ignore if not &lt;h1&gt; header
      if (node.depth !== 1) return;

      astray.walk(node, {
        text(t) {
          // Grab the text value of the H1
          title += t.value;
        },
      });

      return astray.SKIP;
    },
    
    /**
     * Update all anchor links (&lt;a&gt;) for consistent internal linking.
     */
    link(node) {
      let value = node.url;
      
      // Ignore section header links (same page)
      if (value.startsWith('#')) return;

      if (/^(https?:)?\/\//.test(value)) {
        let tmp = new URL(value);
        // Rewrite our own "https://developers.cloudflare.com" links
        // so that they are absolute, path-based links instead.
        if (tmp.origin === 'https://developers.cloudflare.com') {
          value = tmp.pathname + tmp.search + tmp.hash;
        }
      }
      
      // ... other normalization logic ...
      
      // Update the link's `href` value
      node.url = value;
    }
  });
  
  // Now the AST has been modified in place.
  // AKA, the same `AST` variable is (or may be) different than before.
  
  // Convert the AST back to a final string.
  let updated = toMarkdown(AST);
  
  // Write the updated markdown file
  await fs.promises.writeFile(file, updated);
}</code></pre>
            <p><a href="https://gist.github.com/lukeed/d63a4561ce9859765d8f0e518b941642#file-cfblog-devdocs-0-js">https://gist.github.com/lukeed/d63a4561ce9859765d8f0e518b941642#file-cfblog-devdocs-0-js</a></p><p>The above is an abbreviated snippet of the modifications we needed to make during our migration. You may find all the AST traversals and manipulations we created as part of our migration <a href="https://github.com/cloudflare/cloudflare-docs/blob/2b16cd220f79c7cfd64d80f4a4592b73abcf0753/migrate/normalize.ts">on GitHub</a>.</p><p>We also took this opportunity to analyze the thousands and thousands of code snippets we have throughout the codebase. These serve an important role as they are crucial aides in reference documentation or are presented alongside tutorials as recipes or examples. So we added a <a href="https://github.com/cloudflare/cloudflare-docs/blob/ce64f4d28a6bff7de914d54623046384545e0048/bin/format.ts">code formatter script</a> that utilizes <a href="https://prettier.io/">Prettier</a> to apply a consistent code style across all code snippets. As a bonus side effect, Prettier would throw errors if any snippets had invalid syntax for their given language. Any of these were fixed manually and the `format` script has been added as part of our own CI process to ensure that all JavaScript, TypeScript, Rust, JSON, and/or C++ code we publish is syntactically valid!</p><p>Finally, we <a href="https://github.com/cloudflare/cloudflare-docs/blob/2b16cd220f79c7cfd64d80f4a4592b73abcf0753/migrate/Makefile#L7">created a Makefile</a> that coordinated the series of Node scripts and <code>git</code> commands we needed to make. 
This orchestrated the entire migration, boiling down all our work into a single <code>make run</code> command.</p><p>In effect, the majority of the <a href="https://github.com/cloudflare/cloudflare-docs/pull/3609">migration Pull Request</a> was the result of automated commits – over one million changes were applied across nearly 5,000 files in less than two minutes. With the help of product owners, we reviewed the newly generated documentation site and applied any fine-tuning adjustments where necessary.</p><p>Previously, with the Gatsby-based Workers Sites architecture, each Cloudflare product needed to be built and deployed as its own individual documentation site. These sites would then be managed and proxied by an umbrella Worker, listening on <code>developers.cloudflare.com</code>, which ensured that all requests were handled by the appropriate product-specific Worker Site. This worked well for our production needs, but made it complicated for contributors to replicate a similar setup during local development. With the move to Hugo, we were able to merge everything into a single project – in other words, 48 moving pieces became a single piece! This made it extremely easy to build and develop the entire Developer Docs locally, which is a big confidence booster when making changes.</p><p>A unified Hugo project also means that there’s only one build command and one deployable unit… This allowed us to move the Developer Docs to Cloudflare Pages! With Pages attached and configured for the GitHub repository, we immediately began making use of <a href="https://developers.cloudflare.com/pages/platform/preview-deployments/">preview deployments</a> as part of our PR review process, and commits to our <code>production</code> branch automatically queued new production deployments to the live site.</p>
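            <p>To make the orchestration more concrete, here is a minimal sketch of a runner that applies a per-file transform (like the <code>modify</code> function shown earlier) to every Markdown page in a content directory. This is an illustrative sketch using only Node built-ins, not the exact script we shipped; the real orchestration lives in the Makefile and migration scripts linked above:</p>
            <pre><code>import * as fs from 'node:fs';
import * as path from 'node:path';

// Recursively collect every "*.md" file under a directory.
export function collectMarkdown(dir) {
  let files = [];
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) files = files.concat(collectMarkdown(full));
    else if (entry.name.endsWith('.md')) files.push(full);
  }
  return files;
}

// Apply an async per-file transform (such as `modify`) to every page,
// returning the number of files touched.
export async function migrate(contentDir, transform) {
  const files = collectMarkdown(contentDir);
  for (const file of files) {
    await transform(file);
  }
  return files.length;
}</code></pre>
            <p>Driving thousands of files through a scripted loop like this, rather than editing them by hand, is what made a single-morning migration possible.</p>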
    <div>
      <h3>Why we’re excited</h3>
      <a href="#why-were-excited">
        
      </a>
    </div>
    <p>After all the changes were in place, we ended up with a near-identical replica of the Developer Documentation site. However, upon closer inspection, a number of major improvements had been made:</p><ol><li><p>Our application now has fewer moving pieces for development <i>and</i> deployment, which makes it significantly easier to understand and to onboard other contributors and team members.</p></li><li><p>Our development flow is a lot snappier and fully replicates production behavior. This hugely increased our iteration speed and confidence.</p></li><li><p>Our application is now built as an HTML-first static site. Even though it was always a content site, we are now shipping 90% fewer JavaScript bytes, which means that our visitors’ browsers are doing less work to view the same content.</p></li></ol><p>The last point speaks to our web pages’ performance, which has real-world implications. These days, websites with faster page-load times are preferred over competitor sites with slower response times. This is true for human and bot users alike! In fact, Google now <a href="https://developer.chrome.com/blog/search-ads-speed/">takes page speed into consideration</a> when ranking search results and offers tools like Search Console and Lighthouse to help site owners track and improve their scores. Below, you can see the before-after comparison of our previous JS-first site to our HTML-first replacement:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1FaGWaH2XlN9riYuoufZAQ/5978df6bf6a6299b62e75f1606058e9d/image2-55.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/60vizdQG71GVdLRHE12TyQ/f832833af15e2bd6200f07a9033a94dd/image3-40.png" />
            
            </figure><p>Here you can see that our <code>Performance</code> grade has significantly improved! It’s this figure, a weighted score of metrics like First Contentful Paint, that is tied to page speed. While this <i>does</i> have SEO impact, the <code>SEO</code> score in a Lighthouse report has to do with the Google crawler’s ability to parse and understand the page’s metadata. That score remains unchanged because the content (and its metadata) were not changed as part of the migration.</p>
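            <p>To illustrate what a “weighted score” means here, the sketch below aggregates normalized metric scores (each between 0 and 1) into a single 0–100 grade. The metric names and weights in the example call are hypothetical stand-ins, not Lighthouse’s actual weighting:</p>
            <pre><code>// Illustrative sketch: combine normalized metric scores (0 to 1)
// into a single 0-100 grade using a weighted average.
// The weights are hypothetical, not Lighthouse's real ones.
export function performanceScore(metrics, weights) {
  let total = 0;
  let weightSum = 0;
  for (const [name, weight] of Object.entries(weights)) {
    total += (metrics[name] ?? 0) * weight;
    weightSum += weight;
  }
  return Math.round((total / weightSum) * 100);
}

// performanceScore(
//   { firstContentfulPaint: 0.9, largestContentfulPaint: 0.8 },
//   { firstContentfulPaint: 1, largestContentfulPaint: 1 }
// ) // → 85</code></pre>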
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Developer documentation is incredibly important to the success of any product. At Cloudflare, we believe that technical docs are a product – one that we can continue to iterate on, improve, and make more useful for our customers.</p><p>One of the most effective ways to improve documentation is to make it easier for our writers to contribute to it. With our new Documentation Engine, we’re giving our product content team the ability to validate content faster with instantaneous local builds. Preview links via Cloudflare Pages give stakeholders like product managers and engineering teams the ability to quickly review what the docs will <i>actually</i> look like in production.</p><p>As we invest more into our build and deployment pipeline, we expect to further develop our ability to validate both the content and technical implementation of docs as part of review – tools like automatic spell checking, link validation, and visual diffs are all things we’d like to explore in the future.</p><p>Importantly, our documentation continues to be 100% open source. If you read Cloudflare’s developer documentation and have feedback, feel free to <a href="https://github.com/cloudflare/cloudflare-docs/">check out the project on GitHub</a> and submit suggestions!</p> ]]></content:encoded>
            <category><![CDATA[Technical Writing]]></category>
            <category><![CDATA[Developer Documentation]]></category>
            <guid isPermaLink="false">4o47pQSmXUYk7med7nhGAM</guid>
            <dc:creator>Kristian Freeman</dc:creator>
            <dc:creator>Luke Edwards</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing our Spring Developer Speaker Series]]></title>
            <link>https://blog.cloudflare.com/announcing-our-spring-developer-speaker-series/</link>
            <pubDate>Sun, 08 May 2022 16:59:23 GMT</pubDate>
            <description><![CDATA[ We're excited to announce a new edition of our Developer Speaker Series for 2022! Check out the eleven expert web dev speakers, developers, and educators that we’ve invited to speak live on Cloudflare TV ]]></description>
            <content:encoded><![CDATA[ <p></p><p>We love developers.</p><p>Late last year, we hosted <a href="/full-stack-week-2021/">Full Stack Week</a>, with a focus on new products, features, and partnerships to continue growing Cloudflare’s developer platform. As part of Full Stack Week, we also hosted the Developer Speaker Series, bringing 12 speakers in the web dev community to our 24/7 online TV channel, <a href="https://cloudflare.tv/">Cloudflare TV</a>. The talks covered topics across the web development ecosystem, which you can <a href="https://www.cloudflare.com/developer-speaker-series/">rewatch</a> at any time.</p><p>We loved organizing the Developer Speaker Series last year. But as developers know far too well, our ecosystem changes rapidly: what may have been cutting edge back in November 2021 can be old news just a few months later in 2022. That’s what makes conferences and live speaking events so valuable: they serve as an up-to-date reference of best practices and future-facing developments in the industry. With that in mind, we're excited to announce a new edition of our Developer Speaker Series for 2022!</p><p>Check out the eleven expert web dev speakers, developers, and educators that we’ve invited to speak live on Cloudflare TV! Here are the talks you’ll be able to watch, starting tomorrow morning (May 9 at 09:00 PT):</p><p><b>The Bootcamper's Companion</b> – Caitlyn Greffly</p><p><i>In her recent book, The Bootcamper's Companion, Caitlyn dives into the specifics of how to build connections in the tech field, understand confusing tech jargon, and make yourself a stand-out candidate when looking for your first job. 
She'll talk about some top tips and share a bit about her experience as well as what she has learned from navigating tech as a career changer.</i></p><p><b>Engaging Ecommerce with the Visual Web</b> – Colby Fayock</p><p><i>Experiences on the web have grown increasingly visual, from displaying product images to interactive NFTs, but not paying attention to how media is delivered can impact Core Web Vitals, creating a bad UX with slow-loading pages, hurting your store’s conversion and potentially losing sales.</i></p><p><i>How can we effectively leverage media to showcase products, creating engaging experiences for our store? We’ll talk about the media's role in ecomm and how we can take advantage of it while optimizing delivery.</i></p><p><b>Testing Web Applications with Playwright</b> – Debbie O’Brien</p><p><i>Testing is hard, testing takes time to learn and to write, and time is money. As developers, we want to test. We know we should, but we don't have time. So how can we get more developers to do testing? We can create better tools.</i></p><p><i>Let me introduce you to Playwright, a reliable tool for end-to-end cross browser testing for modern web apps, by Microsoft and fully open source. Playwright's codegen generates tests for you in JavaScript, TypeScript, .NET, Java or Python. Now you really have no excuses. It's time to play your tests wright.</i></p><p><b>Building serverless APIs: how Fauna and Workers make it easy</b> – Rob Sutter</p><p><i>Building APIs has always been tricky when it comes to setting up architecture. FaunaDB and Workers remove that burden by letting you write code and watch it run everywhere.</i></p><p><b>Business context is developer productivity</b> – John Feminella</p><p><i>A major factor in developer productivity is whether they have the context to make decisions on their own, or if instead they can only execute someone else's plan. 
But how do organizations give engineers the appropriate context to make those decisions when they weren't there from the beginning?</i></p><p><b>On the edge of my server</b> – Brian Rinaldi</p><p><i>Edge functions can be potentially game changing. You get the power of serverless functions but running at the CDN level - meaning the response is incredibly fast. With Cloudflare Workers, every worker is an edge function. In this talk, we’ll explore why edge functions can be powerful and explore examples of how to use them to do things a normal serverless function can't do.</i></p><p><b>Ten things I love about Wrangler 2</b> – Sunil Pai</p><p><i>We spent the last six months rewriting wrangler, the CLI for building and deploying Cloudflare Workers. Almost every single feature has been upgraded to be more powerful and user-friendly, while still remaining backward compatible with the original version of wrangler. In this talk, we'll go through some of the best parts about the rewrite, and how it provides the foundation for all the things we want to build in the future.</i></p><p><b>L is for Literacy</b> – Henri Helvetica</p><p><i>It’s 2022, and web performance is now abundantly important, with an abundance of available metrics, used by — you guessed it — an abundance of developers, new and experienced. All quips aside, the complexities of the web have led to increased complexities in web performance. Understanding, or literacy in, web performance is as important as the four basic language skills. ‘L is for Literacy’ is a lively look at performance lexicon, backed by enlightening data all will enjoy.</i></p><p><b>Cloudflare Pages Updates</b> – Greg Brimble</p><p><i>Greg Brimble, a Systems Engineer working on Pages, will showcase some of this week’s announcements live on Cloudflare TV. Tune in to see what is now possible for your Cloudflare Pages projects. 
We're excited to show you what the team has been working on!</i></p><p><b>Migrating to Cloudflare Pages: A look into git control, performance, and scalability</b> – James Ross</p><p><i>James Ross, CTO of Nodecraft, will discuss how moving to Pages brought an improved experience for both users and his team building the future of game servers.</i></p><p>If you want to see the full schedule for the Developer Speaker Series, go to <a href="https://www.cloudflare.com/developer-speaker-series/">our landing page</a>. It shows each talk, including speaker info and timing, as well as time zones for international viewers. When a talk goes live, tuning in is simple – just visit <a href="https://cloudflare.tv">cloudflare.tv</a> to start watching.</p><p>New this year, we’ve also prepared a Discord channel to follow the live conversation with other viewers! If you haven’t joined Cloudflare’s Discord server, <a href="https://discord.gg/cloudflaredev">get your invite</a>.</p> ]]></content:encoded>
            <category><![CDATA[Platform Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare TV]]></category>
            <guid isPermaLink="false">3Ng5TW5HjlqJyiA50o4XsU</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing native support for Stripe’s JavaScript SDK in Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/announcing-stripe-support-in-workers/</link>
            <pubDate>Fri, 19 Nov 2021 13:59:58 GMT</pubDate>
            <description><![CDATA[ Handling payments inside your apps is crucial to building a business online. For many developers, the leading choice for handling payments is Stripe.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Handling payments inside your apps is crucial to building a business online. For many developers, the leading choice for handling payments is <a href="https://stripe.com/">Stripe</a>. Since my first encounter with Stripe about seven years ago, the service has evolved far beyond simple payment processing. In the <a href="/building-black-friday-e-commerce-experiences-with-jamstack-and-cloudflare-workers/">e-commerce example application I shared last year</a>, Stripe managed a complete seller marketplace, using the <a href="https://stripe.com/connect">Connect</a> product. Stripe's product suite is great for developers looking to go beyond accepting payments.</p><p>Earlier versions of Stripe's SDK had core Node.js dependencies, like many popular JavaScript packages. In Stripe’s case, it interacted directly with core Node.js modules like <code>net</code> and <code>http</code> to handle HTTP interactions. For Cloudflare Workers, a V8-based runtime, this meant that the official Stripe JS library didn’t work; you had to fall back to using Stripe’s (very well-documented) REST API. By doing so, you’d lose the benefits of using Stripe’s native JS library — things like automatic type-checking in your editor, and the simplicity of function calls like <code>stripe.customers.create()</code>, instead of manually constructed HTTP requests, to interact with Stripe’s various pieces of functionality.</p><p>In April, we wrote that we were <a href="/node-js-support-cloudflare-workers/">focused on increasing</a> the number of JavaScript packages that are <a href="https://workers.cloudflare.com/works">compatible with Workers</a>:</p><blockquote><p>We won’t stop until our users can import popular Node.js libraries seamlessly. 
This effort will be large-scale and ongoing for us, but we think it’s well worth it.</p></blockquote><p>I’m excited to announce general availability for the <a href="https://github.com/stripe/stripe-node"><code>stripe</code></a> JS package for Cloudflare Workers. You can now use the native Stripe SDK directly in your projects! To get started, install <code>stripe</code> in your project: <code>npm i stripe</code>.</p><p>This also opens up a great opportunity for developers who have deployed sites to Cloudflare Pages to begin using Stripe right alongside their applications. With this week’s announcement of serverless function support in Cloudflare Pages, you can accept payments for your digital product, or handle subscriptions for your membership site, with a few lines of JavaScript added to your Pages project. There’s no additional configuration, and it scales automatically, just like your Pages site.</p><p>To showcase this, we’ve prepared an example open-source repository, showing how you can integrate <a href="https://stripe.com/payments/checkout">Stripe Checkout</a> into your Pages application. There’s no additional configuration needed — as we announced yesterday, Pages’ new Workers Functions feature allows you to deploy infinitely-scalable functions just by adding a new <code>functions</code> folder to your project. See it in action at <a href="https://stripe.pages.dev">stripe.pages.dev</a>, or <a href="https://github.com/cloudflare/stripe.pages.dev">check out the open-source repository on GitHub</a>.</p><p>With the SDK installed, you can begin accepting payments directly in your applications. The below example shows <a href="https://stripe.com/docs/payments/accept-a-payment?integration=checkout">how to initiate a new Checkout session</a> and redirect to Stripe’s hosted checkout page:</p>
            <pre><code>import Stripe from 'stripe/lib/stripe.js';

// use web crypto 
export const webCrypto = Stripe.createSubtleCryptoProvider();

export function getStripe({env}){
  if(!env?.STRIPE_KEY){
    throw new Error('Can not initialize Stripe without STRIPE_KEY');
  }
  const client = Stripe(env.STRIPE_KEY, {
      httpClient: Stripe.createFetchHttpClient(), // ensure we use a Fetch client, and not Node's `http`
  });
  return client;
}

export default {
  async fetch(request, env) {
    const stripe = getStripe({ env })
    const session = await stripe.checkout.sessions.create({
      line_items: [{
        price_data: {
          currency: 'usd',
          product_data: {
            name: 'T-shirt',
          },
          unit_amount: 2000,
        },
        quantity: 1,
      }],
      payment_method_types: [
        'card',
      ],
      mode: 'payment',
      success_url: `${YOUR_DOMAIN}/success.html`,
      cancel_url: `${YOUR_DOMAIN}/cancel.html`,
    });

    return Response.redirect(session.url)
  }
}</code></pre>
            <p>With support for the Stripe SDK natively in Cloudflare Workers, you aren’t just limited to payment processing either. Any JavaScript example that's currently in <a href="https://stripe.com/docs/api?lang=node">Stripe’s extensive documentation</a> works directly in Workers, without needing to make any changes.</p><p>In particular, using Workers to handle the multitude of available Stripe webhooks means that you can get better visibility into how your existing projects are working, without needing to spin up any new infrastructure. The below example shows how you can securely validate incoming webhook requests to a Workers function, and perform business logic by parsing the data inside that webhook:</p>
            <pre><code>import Stripe from 'stripe/lib/stripe.js';

// use web crypto 
export const webCrypto = Stripe.createSubtleCryptoProvider();

export function getStripe({env}){
  if(!env?.STRIPE_KEY){
    throw new Error('Can not initialize Stripe without STRIPE_KEY');
  }
  const client = Stripe(env.STRIPE_KEY, {
      httpClient: Stripe.createFetchHttpClient(), // ensure we use a Fetch client, and not Node's `http`
  });
  return client;
}

export default {
  async fetch(request, env) {
    const stripe = getStripe({ env });
    const body = await request.text();
    const sig = request.headers.get('stripe-signature');

    const event = await stripe.webhooks.constructEventAsync(
      body,
      sig,
      env.STRIPE_ENDPOINT_SECRET,
      undefined,
      webCrypto
    );

    // Handle the event
    switch (event.type) {
      case 'payment_intent.succeeded':
        const paymentIntent = event.data.object;
        // Then define and call a method to handle the successful payment intent.
        // handlePaymentIntentSucceeded(paymentIntent);
        break;
      case 'payment_method.attached':
        const paymentMethod = event.data.object;
        // Then define and call a method to handle the successful attachment of a PaymentMethod.
        // handlePaymentMethodAttached(paymentMethod);
        break;
      // ... handle other event types
      default:
        console.log(`Unhandled event type ${event.type}`);
    }

    // Return a response to acknowledge receipt of the event
    return new Response(JSON.stringify({ received: true }), {
      headers: { 'Content-type': 'application/json' }
    })
  }
}
</code></pre>
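<p>The switch statement above only stubs out its event handlers. As one minimal sketch of what a handler could look like (the <code>PAYMENTS</code> KV namespace binding here is hypothetical, not part of the Stripe template), you might persist a record of each successful payment:</p>

```javascript
// Hypothetical handler for the payment_intent.succeeded branch above.
// Assumes a Workers KV namespace bound as env.PAYMENTS — named here for
// illustration only, not part of the Stripe template.

// Pull out just the fields worth persisting from a PaymentIntent.
function paymentSummary(paymentIntent) {
  return {
    id: paymentIntent.id,
    amount: paymentIntent.amount, // smallest currency unit, e.g. cents
    currency: paymentIntent.currency,
    status: 'fulfilled',
  };
}

// Store the summary in KV, keyed by the PaymentIntent id.
async function handlePaymentIntentSucceeded(paymentIntent, env) {
  await env.PAYMENTS.put(paymentIntent.id, JSON.stringify(paymentSummary(paymentIntent)));
}
```

<p>Inside the switch, you would then call <code>await handlePaymentIntentSucceeded(paymentIntent, env)</code> before the <code>break</code>.</p>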
            <p>We’re also announcing <a href="https://github.com/stripe-samples/stripe-node-cloudflare-worker-template">a new Workers template</a> in partnership with Stripe. The template will help you get up and running with Stripe and Workers, using our best practices.</p><p>In less than five minutes, you can begin accepting payments for your next digital product or membership business. You can also handle and validate incoming webhooks at the edge, from a single codebase. This comes with no upfront cost, server provisioning, or any of the standard scaling headaches. Take your serverless functions, deploy them to Cloudflare’s edge, and start making money!</p><blockquote><p>“We're big fans of Cloudflare Workers over here at Stripe. Between the wild performance at the edge and fantastic serverless development experience, we're excited to see what novel ways you all use Stripe to make amazing apps.”— <b>Brian Holt, Stripe</b></p></blockquote><p>We’re thrilled with this incredible update to Stripe’s JavaScript SDK. We’re also excited to see what you’ll build with native Stripe support in Workers. Join our Discord server and share what you’ve built in the #what-i-built channel — <a href="https://cloudflare.community/">get your invite here</a>.</p> ]]></content:encoded>
            <category><![CDATA[Full Stack Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">2jyffZ43SV9rphtVcGSHbM</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Developer Spotlight: Chris Coyier, CodePen]]></title>
            <link>https://blog.cloudflare.com/developer-spotlight-codepen/</link>
            <pubDate>Wed, 17 Nov 2021 13:58:58 GMT</pubDate>
            <description><![CDATA[ Due to the nature of CodePen — namely, hosting code and an incredibly popular embedding feature, allowing developers to share their CodePen “pens” around the world — any sort of optimization can have a massive impact on CodePen’s business.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p><a href="https://twitter.com/chriscoyier">Chris Coyier</a> has been building on the web for over 15 years. Chris made his mark on the web development world with <a href="https://css-tricks.com/">CSS-Tricks</a> in 2007, one of the web's leading publications for frontend and full-stack developers.</p><p>In 2012, Chris co-founded <a href="https://codepen.io/">CodePen</a>, which is an online code editor that lives in the browser and allows developers to collaborate and share code examples written in HTML, CSS, and JavaScript.</p><p>Due to the nature of CodePen — namely, hosting code and an incredibly popular embedding feature, allowing developers to share their CodePen “pens” around the world — any sort of optimization can have a massive impact on CodePen’s business. Increasingly, CodePen relies on the ability to both execute code and store data on Cloudflare’s network as a first stop for those optimizations. As Chris puts it, CodePen uses Cloudflare Workers for "so many things":</p><blockquote><p>"We pull content from an external CMS and use Workers to manipulate HTML before it arrives to the user's browser. For example, we fetch the original page, fetch the content, then stitch them together for a full response."</p></blockquote><p>Workers allows you to work with responses directly using the native Request/Response classes and, with the addition of our streaming HTMLRewriter engine, you can modify, combine, and parse HTML without any loss in performance. Because Workers functions are deployed to Cloudflare’s network, CodePen has the ability to instantly deploy highly-intelligent middleware between their origin servers and their clients, without needing to spin up any additional infrastructure.</p><p>In a popular video on his YouTube channel, Chris sits down with a front-end engineer at CodePen and covers how they use Cloudflare Workers to build crucial CodePen features. 
Here’s Chris:</p><blockquote><p>“Cloudflare Workers are like serverless functions that always run at the edge, making them incredibly fast. Not only that, but the tooling around them is really nice. They can run at particular routes on your own website, removing any awkward CORS troubles, and helping you craft clean APIs. But they also have special superpowers, like access to KV storage (also at the edge), image manipulation and optimization, and HTML rewriting.”</p></blockquote><p>CodePen also leverages Workers KV to store data. This allows them to avoid an immense amount of repetitive processing work by caching results and making them accessible on Cloudflare’s network, geographically near their users:</p><blockquote><p>"We use Workers combined with the KV Store to run expensive jobs. For example, we check the KV Store to see if we need to do some processing work, or if that work has already been done. If we need to do the work, we do it and then update KV to know it's been done and where the result of that work is."</p></blockquote><p>In a follow-up video on his YouTube channel, Chris dives into Workers KV and shows how you can build a simple serverless function — with storage — and deploy it to Cloudflare. With the addition of Workers KV, you can persist complex data structures side-by-side with your Workers function, without compromising on performance or scalability.</p><p>Chris and the CodePen team are invested in Workers and, most importantly, they <i>enjoy</i> developing with Cloudflare's developer tooling. "The DX around them is suspiciously nice. Coming from other cloud functions services, there seems to be a just-right amount of tooling to do the things we need to do."</p><p>CodePen is a great example of what’s possible when you integrate the Cloudflare Workers developer environment into your stack. 
Across all parts of the business, Workers, and tools like Workers KV and HTMLRewriter, allow CodePen to build highly-performant applications that look towards the future.</p><p>If you’d like to learn more about Cloudflare Workers, and deploy your own serverless functions to Cloudflare’s network, check out <a href="https://workers.cloudflare.com/">workers.cloudflare.com</a>!</p> ]]></content:encoded>
            <category><![CDATA[Full Stack Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Spotlight]]></category>
            <guid isPermaLink="false">22O0h0ZYE4FXjP9Qczzxe5</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Developer Spotlight: James Ross, Nodecraft]]></title>
            <link>https://blog.cloudflare.com/developer-spotlight-nodecraft/</link>
            <pubDate>Tue, 16 Nov 2021 13:58:49 GMT</pubDate>
            <description><![CDATA[ Nodecraft allows gamers to host dedicated servers for their favorite games. James Ross is the Chief Technology Officer for Nodecraft and has advocated for Cloudflare — particularly Cloudflare Workers —  within the company. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Nodecraft allows gamers to host dedicated servers for their favorite games. James Ross is the Chief Technology Officer for Nodecraft and has advocated for Cloudflare — particularly Cloudflare Workers —  within the company.</p><blockquote><p>"We use Workers for all kinds of things. We use Workers to optimize our websites, handle redirects, and deal with image content negotiation for our main website. We're very fortunate that the majority of our users are using modern web browsers, so we can serve images in formats like JPEG XL and AVIF to users through a Workers script".</p></blockquote>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1xx49Euhf6Q76buejUInQu/ecc7d5f3daa1ffd50bb03247507fd4cb/image3-20.png" />
            
            </figure><p>Nodecraft also maintains a number of microsites and APIs that are relied upon by the gaming community to retrieve game information. <a href="https://playerdb.co/">PlayerDB</a> provides a JSON API for looking up information on user profiles for a number of gaming services, and <a href="https://mcuuid.net/">MCUUID</a> and <a href="https://steamid.net/">SteamID</a> are wrapped frontends for users of those services to interact with that API. Each of these is written and deployed as a Cloudflare Worker:</p><blockquote><p>"Whenever a player joins a Minecraft server, we want to get their information — like their name and player image — and show it to our users. That API receives a hundred million requests a month. And we use the same API endpoint for three other websites that display that data to our users."</p></blockquote>
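<p>The pattern James describes — a small Worker serving a JSON API in front of a slower upstream — can be sketched as a single module Worker that checks Workers KV before falling back to the origin. Everything below is illustrative: the route shape, the <code>PROFILES</code> KV binding, and the upstream URL are hypothetical placeholders, not Nodecraft's actual code.</p>

```javascript
// Sketch of a profile-lookup Worker in the style Nodecraft describes.
// The upstream URL and the PROFILES KV binding are hypothetical.

// Normalize an incoming path like /player/Notch into a lookup key.
function playerKeyFromPath(pathname) {
  const match = pathname.match(/^\/player\/([A-Za-z0-9_]{1,16})$/);
  return match ? match[1].toLowerCase() : null;
}

const worker = {
  async fetch(request, env) {
    const key = playerKeyFromPath(new URL(request.url).pathname);
    if (!key) return new Response('Bad request', { status: 400 });

    // Check Workers KV first, so repeat lookups never hit the upstream API.
    const cached = await env.PROFILES.get(key);
    if (cached) {
      return new Response(cached, { headers: { 'Content-Type': 'application/json' } });
    }

    const upstream = await fetch(`https://api.example.com/profiles/${key}`);
    const body = await upstream.text();
    // Cache for a day; KV is eventually consistent, which is fine here.
    await env.PROFILES.put(key, body, { expirationTtl: 86400 });
    return new Response(body, { headers: { 'Content-Type': 'application/json' } });
  },
};
// In a deployed Worker this object would be exported: export default worker;
```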
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4SkEyn5XmKLyEjsh6ou5JK/a4f5af41d0663fa8a572da0bc7bae262/image1-30.png" />
            
            </figure><p>We love these kinds of integrations between Workers and developers’ existing infrastructures because they serve as a great way to onboard a team onto the Workers platform. You can look for existing parts of your infrastructure that may be resource-intensive, slow, or expensive, and port them to Workers. In Nodecraft’s case, this relieved them of an enormous maintenance burden, and the result is, simply put, peace of mind:</p><blockquote><p>"Handling three hundred million requests a month in our own infrastructure would be really tough, but when we throw it all into a Worker, we don't need to worry about scale. Occasionally, someone will write a Worker, and it will service 30 million requests in a single hour... we won't even notice until we look at the stats in the Cloudflare dashboard."</p></blockquote><p>Nodecraft has been using Cloudflare for almost ten years. They first explored Workers to simplify their application’s external image assets. Their very first Worker replaced an image proxy they had previously hosted in their infrastructure and, since then, Workers has changed the way they think about building applications.</p><blockquote><p>"With the advent of Jamstack, we started using Workers Sites and began moving towards a static architecture. But many of our Workers Sites projects had essentially an infinite number of pages. For instance, Minecraft has 15 million (and growing) user profiles. It's hard to build a static page for each of those. Now, we use HTMLRewriter to inject a static page with dynamic user content."</p></blockquote><p>For James, Workers has served as a catalyst for how he understands the future of web applications. Cloudflare and the Workers development environment allow his team to stop worrying about scaling and infrastructure, and that means that Nodecraft builds on Cloudflare <i>by default</i>:</p><blockquote><p>"Workers has definitely changed how we think about building applications. 
Now, we think about how we can build our applications to be deployed directly to Cloudflare's edge."</p></blockquote><p>As a community moderator and incredibly active member of our Discord community, James is excited about the future of Cloudflare's stack. As we've been teasing it over the past few months, James has been looking forward to integrating Workers functions with Nodecraft’s Pages applications. The release of that feature allows Nodecraft to move entirely to a Pages-driven deployment for their sites. He's also looking forward to the release of Cloudflare R2, our storage product, because Nodecraft makes heavy use of similar storage products and would like to move entirely onto Cloudflare's tooling wherever possible.</p><p>If you’d like to learn more about Cloudflare Workers, and deploy your own serverless functions to Cloudflare’s network, check out <a href="https://workers.cloudflare.com/">workers.cloudflare.com</a>!</p>
 ]]></content:encoded>
            <category><![CDATA[Full Stack Week]]></category>
            <category><![CDATA[Developer Spotlight]]></category>
            <guid isPermaLink="false">OE8M6dtGh45adD7PChelt</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Get started Building Web3 Apps with Cloudflare]]></title>
            <link>https://blog.cloudflare.com/get-started-web3/</link>
            <pubDate>Fri, 01 Oct 2021 12:59:34 GMT</pubDate>
            <description><![CDATA[ Learn how to build Web3 applications with Cloudflare’s new open-source template. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ob5X20V0l91LLtN25Jd4r/18cd9d69eee4cbcaaa21ea9afc3a9db5/image1-5.png" />
            
            </figure><p>For many developers, the term Web3 feels like a buzzword — it's the sort of thing you see on a popular "<i>Things you need to learn in 2021</i>" tweet. As a software developer, I've spent years feeling the same way. In the last few months, I’ve taken a closer look at the Web3 ecosystem, to better understand how it works, and why it matters.</p><p>Web3 can generally be described as a decentralized evolution of the Internet. Instead of a few providers acting as the mediators of how your interactions and daily life on the web should work, a Web3-based future would liberate your data from proprietary databases and operate without centralization via the incentive structure inherent in blockchains.</p><p>The Web3 space in 2021 looks and feels much different from what it did a few years ago. Blockchains like <a href="https://ethereum.org/en/">Ethereum</a> are handling incredible amounts of traffic with relative ease — although some improvements are needed — and newer blockchains like Solana have entered the space as genuine alternatives that could alleviate some of the scaling issues we've seen in the past few years.</p><p>Cloudflare is incredibly well-suited to empower developers to build the future with Web3. The announcement of <a href="/announcing-web3-gateways/">Cloudflare’s Ethereum gateway</a> earlier today will enable developers to build scalable Web3 applications on Cloudflare’s reliable network. Today, we’re also releasing an open-source example showing how to deploy, mint, and render NFTs, or non-fungible tokens, using <a href="https://workers.cloudflare.com">Cloudflare Workers</a> and <a href="https://pages.cloudflare.com">Cloudflare Pages</a>. You can <a href="https://cf-web3.pages.dev/">try it out here</a>, or check out the <a href="https://github.com/signalnerve/cfweb3">open-source codebase on GitHub</a> to get started deploying your own NFTs to production.</p>
    <div>
      <h3>The problem Web3 solves</h3>
      <a href="#the-problem-web3-solves">
        
      </a>
    </div>
    <p>When you begin to read about Web3 online, it's easy to get excited about the possibilities. As a software developer, I found myself asking: "<i>What actually is a Web3 application? How do I build one?</i>"</p><p>Most traditional applications make use of three pieces: the database, a code interface to that database, and the user interface. This model — best exemplified in the Model-View-Controller (MVC) architecture — has served the web well for decades. In MVC, the database serves as the storage system for your data models, and the controller determines how clients interact with that data. You define views with HTML, CSS and JavaScript that take that data and display it, as well as provide interactions for creating and updating that data.</p><p>Imagine a social media application with a billion users. In the MVC model, the data models for this application include all the user-generated content that is created daily: posts, friendships, events, and anything else. The controllers written for that application determine who can interact with that data internally; for instance, only the two users in a private conversation can access that conversation. But those controllers — and the application as a whole — don't allow external access to that data. The social media application owns that data and leases it out "for free" in exchange for viewing ads or being tracked across the web.</p><p>This was the lightbulb moment for me: understanding how Web3 offers a compelling solution to these problems. If MVC-based Web 2.0 applications have presented themselves as a collection of "walled gardens" — meaning disparate, closed-off platforms with no interoperability or ownership of data — Web3 is, by design, the exact opposite.</p><p>In Web3 applications, there are effectively two pieces: the blockchain (let's use Ethereum as our example), and the user interface. 
The blockchain has two parts: an account, for a user, a group of users, or an organization, and the blockchain itself, which serves as an immutable system of record of everything taking place on the network.</p><p>One crucial aspect to understand about the blockchain is the idea that code can be deployed to that blockchain and that users of that blockchain can execute the code. In Ethereum, this is called a "smart contract". Smart contracts executed against the blockchain are like the controller of our MVC model. Instead of living in shrouded mystery, smart contracts are verifiable, and the binary code can be viewed by anyone.</p><p>For our hypothetical social media application, that means that any actions taken by a user are not stored in a central database. Instead, the user interacts with the smart contract deployed on the blockchain network, using a program that can be verified by anyone. Developers can begin building user interfaces to display that information and easily interact with it, with no walled gardens or platform lock-in. In fact, another developer could come up with a better user interface or smart contract, allowing users to move between these interfaces and contracts based on which aligns best with their needs.</p><p>Operating with these smart contracts happens via a wallet (for instance, an Ethereum wallet managed by <a href="https://metamask.io/">MetaMask</a>). The wallet is owned by a user and not by the company providing the service. This means you can take your wallet (the final authority on your data) and do what you want with it at any time. Wallets themselves are another programmable aspect of the blockchain — while they can represent a single user, they can also be complex multi-signature wallets that represent the interests of an entire organization. 
Owners of that wallet can choose to make consensus decisions about what to do with their data.</p><blockquote><p>people are talking trash about "web3" as a term,</p><p>but having all my data on multiple websites is cool</p><p>and having websites compete on interfaces for the same data is rad</p><p>— pm (@pm) <a href="https://twitter.com/pm/status/1434251123532070912?ref_src=twsrc%5Etfw">September 4, 2021</a></p></blockquote>
    <div>
      <h3>The rise of non-fungible tokens</h3>
      <a href="#the-rise-of-non-fungible-tokens">
        
      </a>
    </div>
    <p>One of the biggest recent shifts in the Web3 space has been the growth of NFTs — non-fungible tokens. Non-fungible tokens are unique assets stored on the blockchain that users can trade and verify ownership of. In 2019, Cloudflare was already writing about NFTs, as part of our <a href="/cloudflare-ethereum-gateway">announcement of the Cloudflare Ethereum Gateway</a>. Since then, NFTs have exploded in popularity, with projects like <a href="https://www.larvalabs.com/cryptopunks">CryptoPunks</a> and <a href="https://boredapeyachtclub.com/#/">Bored Ape Yacht Club</a> trading millions of dollars in volume monthly.</p><p>NFTs are a fascinating addition to the Web3 space because they represent how ownership of data and community can look in a post-walled garden world. If you've heard of NFTs before, you may know them as a very visual medium: CryptoPunks and Bored Ape Yacht Club are, at their core, art. You can buy a Punk or Ape and use it as your profile picture on social media. But underneath that, owning an Ape isn’t just owning a profile picture; owners also have exclusive ownership of a blockchain-verified asset.</p><p>It should be noted that the proliferation of NFT contracts led to an increase in the number of <a href="https://www.theverge.com/22683766/nft-scams-theft-social-engineering-opensea-community-recovery">scams</a>. Blockchain-based NFTs are a medium of conveying ownership, based on a given smart contract. This smart contract can be deployed by anyone, and associated with any content. There is no guarantee of authenticity until you verify the trustworthiness and identity of the contract you are interacting with. Some platforms may support <i>Verified</i> accounts, while others only allow a set of trusted partners to appear on their platform. 
NFTs are flexible enough to allow multiple approaches, but these trust assumptions have to be communicated clearly.</p><p>That asset, tied to a smart contract deployed on Ethereum, can be traded, verified, or used as a way to gate access to programs. An NFT developer can hook into the trade event for their NFTs and charge a royalty fee, or when "minting", or creating an NFT, they can charge a mint price, generating revenue on sales and trades to fund their next big project. In this way, NFTs can create strong incentive alignment between developers and community members, more so than your average web application.</p>
    <div>
      <h3>What we built</h3>
      <a href="#what-we-built">
        
      </a>
    </div>
    <p>To better understand Web3 (and how Cloudflare fits into the puzzle), we needed to build something using the Web3 stack, end-to-end.</p><p>To allow you to do the same, we’re open-sourcing a full-stack application today, showing you how to mint and manage an NFT from start to finish. <a href="https://rinkeby.etherscan.io/token/0x290422EC6eADc2CC12aCd98C50333720382CA86B">The smart contract for the application is deployed and verified</a> on Ethereum's <a href="https://www.rinkeby.io/">Rinkeby</a> network, which is a testing environment for Ethereum projects and smart contracts. The Rinkeby test network allows you to test the smart contract off of the main blockchain, using the exact same workflow, without using real ethers. When your project is ready to be deployed on Ethereum’s Mainnet, you can take the same contract, deploy and verify it, and begin using it in production.</p><p>Once deployed, the smart contract gives you the ability to manage your NFT project, compliant with <a href="https://eips.ethereum.org/EIPS/eip-721">the ERC-721 spec</a>, with tokens that can be minted by users and displayed on NFT marketplaces like <a href="https://opensea.io">OpenSea</a>, as well as in your own web applications. We also provided a web interface and example code for minting these NFTs — as a user, you can visit the web application with a compatible Ethereum wallet installed and claim an NFT.</p><p>Once you've minted the NFT, the example user interface will render the metadata for each claimed NFT. According to the ERC-721 (NFT) spec, a deployed token must have a corresponding URL that provides JSON metadata. This JSON endpoint, which we've built with Cloudflare Workers, returns a name and description for each unique NFT, as well as an image. To host this image, we've used Infura to pin the image, and the Cloudflare IPFS Gateway to serve it. 
Our NFT identifies the content via its hash, so it cannot be swapped out for different content in the future.</p><p>This open-source project provides all the tools that you need to build an NFT project. By building on Workers and Pages, you have everything you need to scale a successful NFT launch, and always provide up-to-date metadata for your NFT assets as users mint and trade them between wallets.</p>
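<p>To make the metadata piece concrete, here is a minimal sketch of such an endpoint as a module Worker. The collection name, description, and IPFS path below are placeholders, not the values used in the example project:</p>

```javascript
// Sketch of an ERC-721 metadata endpoint on Cloudflare Workers.
// The names and the IPFS CID are illustrative placeholders.

// Build the ERC-721 metadata JSON for a given token id.
function metadataFor(tokenId) {
  return {
    name: `Example NFT #${tokenId}`,
    description: 'An example token from the open-source demo project.',
    // Content-addressed image: the IPFS hash pins the image to exact bytes.
    image: `https://cloudflare-ipfs.com/ipfs/QmExampleCid/${tokenId}.png`,
  };
}

// Module Worker entry point; routes /metadata/:id to the JSON above.
const worker = {
  async fetch(request) {
    const match = new URL(request.url).pathname.match(/^\/metadata\/(\d+)$/);
    if (!match) return new Response('Not found', { status: 404 });
    return new Response(JSON.stringify(metadataFor(match[1])), {
      headers: { 'Content-Type': 'application/json' },
    });
  },
};
// In a deployed Worker this object would be exported: export default worker;
```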
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/J6PouNAVNlOShutQ8kwX9/01e5a2c22ff367481436b14f73a207e2/image2-3.png" />
            
            </figure><p>Architecture diagram of Cloudflare’s open-source NFT project</p>
    <div>
      <h3>Cloudflare + Web3</h3>
      <a href="#cloudflare-web3">
        
      </a>
    </div>
    <p>Cloudflare's developer platform — including Workers, Pages, and the IPFS gateway — works together to provide scalable solutions at each step of your NFT project's lifecycle. When you move your NFT project to production, Cloudflare’s <a href="/announcing-web3-gateways/">Ethereum</a> and <a href="/announcing-web3-gateways/">IPFS</a> gateways are available to handle any traffic that your project may have.</p><p>We're excited about Web3 at Cloudflare. The world is shifting back to a decentralized model of the Internet, the kind envisioned in the early days of the World Wide Web. As we say a lot around Cloudflare, <a href="/the-network-is-the-computer/">The Network is the Computer</a> — we believe that whatever form Web3 may take, whether through projects like Metaverses, DAOs (decentralized autonomous organizations) and NFTs for community and social networking, DeFi (decentralized finance) applications for managing money, and a whole class of decentralized applications that we probably haven't even thought of...  Cloudflare will be foundational to that future.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">1xYnEmP6dc9MOQXD51cQKN</guid>
            <dc:creator>Kristian Freeman</dc:creator>
            <dc:creator>Jonathan Kuperman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Modernizing a familiar approach to REST APIs, with PostgreSQL and Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/modernizing-a-familiar-approach-to-rest-apis-with-postgresql-and-cloudflare-workers/</link>
            <pubDate>Wed, 04 Aug 2021 12:56:38 GMT</pubDate>
            <description><![CDATA[ By using PostgREST with Postgres, we can build REST API-based applications. In particular, it's an excellent fit for Cloudflare Workers, our serverless function platform. Workers is a great place to build REST APIs. ]]></description>
            <content:encoded><![CDATA[ <p><a href="http://postgresql.com/">Postgres</a> is a ubiquitous open-source database technology. It contains a vast number of features and offers rock-solid reliability. It's also one of the most popular <a href="https://www.cloudflare.com/developer-platform/products/d1/">SQL database tools</a> in the industry. As the industry builds “modern” developer experience tools—real-time and highly interactive—Postgres has also served as a great foundation. Projects like <a href="https://hasura.io/">Hasura</a>, which offers a real-time GraphQL engine, and <a href="https://supabase.io/">Supabase</a>, an open-source Firebase alternative, use Postgres under the hood. This makes Postgres a technology that every developer should know, and consider using in their applications.</p><p>For many developers, REST APIs serve as the primary way we interact with our data. Language-specific libraries like <a href="https://node-postgres.com"><code>pg</code></a> allow developers to connect with Postgres in their code, and directly interact with their databases. Yet in almost every case, developers reinvent the wheel, building the same connection logic on an app-by-app basis.</p><p>Many developers building applications with <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>, our serverless functions platform, have asked how they can use Postgres in Workers functions. Today, we're releasing <a href="https://developers.cloudflare.com/workers/tutorials/postgres">a new tutorial for Workers</a> that shows how to connect to Postgres inside Workers functions. 
Built on <a href="http://postgrest.com/">PostgREST</a>, you'll write a REST API that communicates directly with your database, on the edge.</p><p>This means that you can entirely build applications on Cloudflare’s edge — using Workers as a performant and globally-distributed API, and <a href="https://pages.cloudflare.com/">Cloudflare Pages</a>, our Jamstack deployment platform, as the <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">host for your frontend user interface</a>. With Workers, you can add new API endpoints and handle authentication <i>in front</i> of your database without needing to alter your Postgres configuration. With features like Workers KV and Durable Objects, Workers can provide globally-distributed caching in front of your Postgres database. <a href="/introducing-websockets-in-workers/">Features like WebSockets</a> can be used to build real-time interactions for your applications, without having to migrate from Postgres to a new database-as-a-service platform.</p><p>PostgREST is an open-source tool that generates a standards-compliant REST API for your Postgres databases. Many growing database-as-a-service startups like <a href="https://retool.com/">Retool</a> and <a href="http://supabase.com/">Supabase</a> use PostgREST under the hood. PostgREST is fast and has great defaults, allowing you to access your Postgres data using standard REST conventions.</p><p>It’s great to be able to access your database directly from Workers, but do you really want to expose your database directly to the public Internet? Luckily, Cloudflare has a solution for this, and it works great with PostgREST: <a href="https://www.cloudflare.com/products/tunnel/">Cloudflare Tunnel</a>. Cloudflare Tunnel is one of my personal favorite products at Cloudflare. It creates a secure tunnel between your local server and the Cloudflare network. We want to expose our PostgREST endpoint, without making our entire database available on the public internet. 
Cloudflare Tunnel allows us to do that securely.</p>
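<p>Those REST conventions are worth seeing concretely: PostgREST maps each table to a URL path and encodes filters in the query string (for example, <code>followers=gte.100</code> for "followers ≥ 100"). As a rough sketch — the host and table names here are hypothetical — you could query such an endpoint from a Worker with plain <code>fetch</code>:</p>

```javascript
// Sketch: calling a PostgREST endpoint directly with fetch.
// The base URL and table name are hypothetical placeholders.

// PostgREST filter syntax lives in the query string, e.g. followers=gte.100
function postgrestUrl(base, table, filters) {
  return `${base}/${table}?${new URLSearchParams(filters)}`;
}

// Fetch all users with at least 100 followers as a JSON array.
async function usersWithManyFollowers(base) {
  const res = await fetch(postgrestUrl(base, 'users', { select: '*', followers: 'gte.100' }));
  if (!res.ok) throw new Error(`PostgREST request failed: ${res.status}`);
  return res.json();
}
```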
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3yA7ys92hJceHCMsd04rkT/edb36c84a2ea43d56c4f3374c60a3a0b/image1-4.png" />
            
            </figure><p>By using PostgREST with Postgres, we can build REST API-based applications. In particular, it's an excellent fit for Cloudflare Workers, our serverless function platform. Workers is a great place to build REST APIs. With the open-source JavaScript library <a href="https://github.com/supabase/postgrest-js"><code>postgrest-js</code></a>, we can interact with a PostgREST endpoint from inside our Workers function, using simple JS-based primitives.</p><p><i>By the way — if you haven't built a REST API with Workers yet, </i><a href="https://egghead.io/courses/build-a-serverless-api-with-cloudflare-workers-d67ca551?af=a54gwi"><i>check out our free video course with Egghead: "Building a Serverless API with Cloudflare Workers"</i></a><i>.</i></p><p>Scaling applications built on Postgres is an incredibly common problem that developers face. Often, this means duplicating your Postgres database and distributing reads between your primary database, and a fleet of “read replicas”. With PostgREST and Workers, we can begin to explore a different approach to solving the scaling problem. <a href="https://developers.cloudflare.com/workers/learning/how-workers-works">Workers' unique architecture</a> allows us to deploy hyper-performant functions <i>in front</i> of Postgres databases. With tools like Workers KV and Durable Objects, exposed in Workers as basic JavaScript APIs, we can build intelligent caches for our databases, without sacrificing performance or developer experience.</p><p>If you'd like to learn more about building REST APIs in Cloudflare Workers using PostgREST, <a href="https://developers.cloudflare.com/workers/tutorials/postgres">check out our new tutorial</a>! We've also provided two open-source libraries to help you get started. 
<a href="https://github.com/cloudflare/postgres-postgrest-cloudflared-example"><code>cloudflare/postgres-postgrest-cloudflared-example</code></a> helps you set up a Cloudflare Tunnel-backed Postgres + PostgREST endpoint. <a href="https://github.com/cloudflare/postgrest-worker-example"><code>postgrest-worker-example</code></a> is an example of using postgrest-js inside of Cloudflare Workers, to build REST APIs with your Postgres databases.</p>
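<p>A helpful way to think about <code>postgrest-js</code> is that each chained call maps onto PostgREST's URL-based query syntax: filters are encoded as <code>column=operator.value</code> query parameters (for example, <code>GET /users?select=*&followers=gte.100</code>). The sketch below illustrates that mapping with a hypothetical helper — it is not the actual library's implementation:</p>

```javascript
// Illustrative sketch of how a postgrest-js style query chain maps onto a
// PostgREST URL. Hypothetical helper, not the real library internals.
function buildPostgrestUrl(base, table, { select = "*", filters = [] } = {}) {
  const params = new URLSearchParams();
  params.set("select", select);
  // PostgREST encodes filters as column=operator.value, e.g. followers=gte.100
  for (const [column, operator, value] of filters) {
    params.set(column, `${operator}.${value}`);
  }
  return `${base}/${table}?${params.toString()}`;
}

// Equivalent of: client.from('users').select('*').gte('followers', 100)
const url = buildPostgrestUrl("https://db.example.com", "users", {
  select: "*",
  filters: [["followers", "gte", 100]],
});
console.log(url); // https://db.example.com/users?select=*&followers=gte.100
```

<p>Seeing the generated URL makes it clear why a Workers cache in front of PostgREST works so well: identical queries produce identical URLs, which make natural cache keys.</p>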
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3qDgnCYYBrkCji8WvUgM33/09114cef001625e6e56236d2a3575cc0/image2-4.png" />
            
            </figure><p>With <code>postgrest-js</code>, you can build dynamic queries and request data from your database using the JS primitives you know and love:</p>
            <pre><code>// Get all users with at least 100 followers
const { data: users, error } = await client
.from('users')
.select('*')
.gte('followers', 100)</code></pre>
            <p>You can also join our Cloudflare Developers Discord community! Learn more about what you can build with Cloudflare Workers, and meet our wonderful community of developers from around the world. <a href="https://discord.gg/cloudflaredev">Get your invite link here.</a></p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Postgres]]></category>
            <guid isPermaLink="false">8tTIUN8pM2HWOKhmrZfSG</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building real-time games using Workers, Durable Objects, and Unity]]></title>
            <link>https://blog.cloudflare.com/building-real-time-games-using-workers-durable-objects-and-unity/</link>
            <pubDate>Wed, 26 May 2021 13:00:05 GMT</pubDate>
            <description><![CDATA[ Durable Objects are an awesome addition to the Workers developer ecosystem, allowing you to address and work inside a specific Worker to provide consistency in your applications. You might be wondering "Okay, so what can I build with that?" ]]></description>
            <content:encoded><![CDATA[ <p></p><p><a href="https://developers.cloudflare.com/workers/learning/using-durable-objects">Durable Objects</a> are an awesome addition to the Workers developer ecosystem, allowing you to address and work inside a specific Worker to provide consistency in your applications. That sounds exciting at a high level, but if you're like me, you might be wondering "Okay, so what can I build with that?"</p><p>There’s nothing like building something real with a technology to truly understand it.</p><p>To better understand why Durable Objects matter, and how <a href="/introducing-websockets-in-workers/">newer announcements in the Workers ecosystem like WebSockets</a> play with Durable Objects, I turned to a category of software that I've been building in my spare time for a few months now: video games.</p><p>The technical aspects of games have changed drastically in the last decade. Many games are online-by-default, and the ubiquity of tools like <a href="https://unity.com/">Unity</a> has made it so anyone can begin experimenting with developing games.</p><p>I've heard a lot about the ability of Durable Objects and WebSockets to provide real-time consistency in applications, and to test that use case out, I've built <a href="https://durable-world.pages.dev/">Durable World</a>: a simple 3D multiplayer world that is deployed entirely on our Cloudflare stack: Pages for serving the client-side game, which runs in Unity and WebGL, and Workers as the coordination layer, using Durable Objects and WebSockets to sync player position and other information, like randomly generated usernames.</p><blockquote><p>playing multiplayer on the edge with some of my friends from the <a href="https://twitter.com/CloudflareDev?ref_src=twsrc%5Etfw">@cloudflaredev</a> discord?</p><p>client-side:- <a href="https://twitter.com/unity?ref_src=twsrc%5Etfw">@unity</a> webgl + websockets client- hosted on cf pages</p><p>serverless:- cloudflare workers- durable 
objects- websockets <a href="https://t.co/Ef3nr76D2e">pic.twitter.com/Ef3nr76D2e</a></p><p>— kristian (@signalnerve) <a href="https://twitter.com/signalnerve/status/1384274000986005511?ref_src=twsrc%5Etfw">April 19, 2021</a></p></blockquote><p>3D games tend to look really impressive — they serve as great tech demos. Even with the "wow factor" of seeing people connect from all over the world and move around the map with you, you'd probably be surprised at how simple the corresponding code for this project is. Let's dive into both the client and server aspects of Durable World, and then I'll give some thoughts on how a project like this could evolve in the future, and what I'd like to explore next.</p><p>Separately from this blog post, we also <a href="/doom-multiplayer-workers/">recently published a post on Cloudflare’s blog showing a multiplayer Doom port</a> running on Workers using WebAssembly and Durable Objects. The range of use-cases for games on Workers is remarkably broad with the addition of tools like Durable Objects, WebSockets, and WebAssembly, whether you’re porting existing games or building entirely new ones.</p><p>Durable World is built using an <i>authoritative client</i> model. The client runs a compiled game directly in the browser, built into WebAssembly, so it can run without needing to download a platform-specific client to your local machine. The server, which runs entirely on Cloudflare Workers, can be interacted with via WebSockets, and uses Durable Objects to manage game state.</p><p>Much like the Doom example we showcased on our blog, the Durable Object managed by the Workers application acts as a message router, accepting game state changes from clients, and retaining a list of active clients that receive those updates via the Worker.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6h6IhL6O32SUwdi9vXF1cl/3bf80c9508c1666a2f71b2c3513dd40e/image2-7.png" />
            
            </figure>
    <div>
      <h3>Managing connections: the Character Durable Object</h3>
      <a href="#managing-connections-the-character-durable-object">
        
      </a>
    </div>
    <p>My biggest fear before embarking on this project was working with Durable Objects. Even though I've never made any sort of serious game with Unity, and I couldn't even define C# variables without doing Google searches on basic syntax, something about the conceptual pieces of Durable Objects has continued to be intimidating to me, down to the moment I started writing actual code.</p><p>Imagine my surprise when writing Durable Objects and working with the API turned out to be incredibly easy.</p><p>The Character module, a Durable Object using our new module support in Workers, is built on top of our <a href="https://github.com/cloudflare/modules-rollup-esm">modules-rollup-esm</a> template. The module handles incoming requests, and acts as a WebSocket provider for clients:</p>
            <pre><code>export class Character {
  constructor(state, env) {
    this.state = state;
    this.env = env
  }

  async initialize() {
    let stored = await this.state.storage.get("state");
    this.value = stored || { users: [], websockets: [] }
  }

  async handleSession(websocket, ip) {
    websocket.accept()
    // Game state code
  }

  // Handle HTTP requests from clients.
  async fetch(request) {
    if (!this.initializePromise) {
      this.initializePromise = this.initialize().catch((err) =&gt; {
        this.initializePromise = undefined;
        throw err
      });
    }
    await this.initializePromise;

    // Apply requested action.
    let url = new URL(request.url);

    switch (url.pathname) {
      case "/websocket":
        if (request.headers.get("Upgrade") != "websocket") {
          return new Response("Expected websocket", { status: 406 })
        }
        let ip = request.headers.get("CF-Connecting-IP");
        let pair = new WebSocketPair();
        await this.handleSession(pair[1], ip);
        return new Response(null, { status: 101, webSocket: pair[0] });
      case "/":
        break;
      default:
        return new Response("Not found", { status: 404 });
    }

    // this.value is an object; serialize it so the Response body is readable
    return new Response(JSON.stringify(this.value));
  }
}</code></pre>
            <p>Much of this is conceptually identical to our <a href="https://github.com/cloudflare/websocket-template">websocket-template</a> — we look for an Upgrade header in the incoming request, and set up a WebSocketPair, which contains a <i>server</i> and a <i>client</i> WebSocket.</p><p>The <code>handleSession</code> function is where the bulk of our game-specific logic takes place. In this case, our Durable Objects + WebSocket code has two tasks to manage: first, handling new players — giving them a randomly generated username and a valid WebSocket — and second, accepting new player positions and broadcasting those positions to everyone currently in the game. The <code>tick</code> function broadcasts game state to our clients, and the remainder of the code parses incoming data and determines which WebSocket clients should receive new data. The code to do this is seen below:</p>
            <pre><code>async tick(skipKey) {
  const users = this.value.users.filter(user =&gt; user.id !== skipKey)
  this.value.websockets
    .forEach(
      ({ id, name, websocket }) =&gt; {
        websocket.send(
          JSON.stringify({
            id,
            name,
            users
          })
        )
      }
    )
}

async key(ip) {
  const text = new TextEncoder().encode(`${this.env.SECRET}-${ip}`)
  const digest = await crypto.subtle.digest({ name: "SHA-256", }, text)
  const digestArray = new Uint8Array(digest)
  return btoa(String.fromCharCode.apply(null, digestArray))
}

constructName() {
  function titleCase(str) {
    return str.toLowerCase().split(' ').map(function (word) {
      return word.replace(word[0], word[0].toUpperCase());
    }).join(' ');
  }

  return titleCase(faker.fake("{{commerce.color}} {{hacker.adjective}} {{hacker.abbreviation}}"))
}

async handleSession(websocket, ip) {
  websocket.accept()

  try {
    let currentState = this.value;
    const key = await this.key(ip)

    const name = this.constructName()
    let newUser = { id: key, name, position: '0.0,0.0,0.0', rotation: '0.0,0.0,0.0' }
    if (!currentState.users.find(user =&gt; user.id === key)) {
      currentState.users.push(newUser)
      currentState.websockets.push({ id: key, name, websocket })
    }

    this.value = currentState
    this.tick(key)

    websocket.addEventListener("message", async msg =&gt; {
      try {
        let { type, position, rotation } = JSON.parse(msg.data)
        switch (type) {
          case 'POSITION_UPDATED':
            let user = currentState.users.find(user =&gt; user.id === key)
            if (user) {
              user.position = position
              user.rotation = rotation
            }

            this.value = currentState
            this.tick(key)

            break;
          default:
            console.log(`Unknown type of message ${type}`)
            websocket.send(JSON.stringify({ message: "UNKNOWN" }))
            break;
        }
      } catch (err) {
        websocket.send(JSON.stringify({ error: err.toString() }))
      }
    })

    const closeOrError = async evt =&gt; {
      currentState.users = currentState.users.filter(user =&gt; user.id !== key)
      currentState.websockets = currentState.websockets.filter(user =&gt; user.id !== key)
      this.value = currentState
      this.tick(key)
    }

    websocket.addEventListener("close", closeOrError)
    websocket.addEventListener("error", closeOrError)
  } catch (err) {
    websocket.send(JSON.stringify({ message: err.toString() }))
  }
}</code></pre>
            <p>When setting up a new WebSocketPair, the Workers function creates a unique ID derived from the user's IP address (though you could just as easily use a UUID or anything else), and begins sending WebSocket data down to the new client. When data comes in (e.g. a new player position), the function looks at <i>who</i> is sending it, and sends the new information to every <i>other</i> WebSocket currently in the game.</p>
    <div>
      <h3>Handling player position and movement: building with Durable Objects in Unity</h3>
      <a href="#handling-player-position-and-movement-building-with-durable-objects-in-unity">
        
      </a>
    </div>
    <p>Unity is a great game engine for someone like me: a fairly experienced programmer who has <i>no</i> experience in making games. I've been working with Unity on and off for years, but in the last few months I've been diving deep into it and expanding my understanding of how to actually build real games.</p><p>Here's what you need to know about Unity in the context of building Durable World: Game Objects are the primary <i>class</i> of everything in Unity, and using C# scripts you can program different behaviors for your Game Objects, whether networked or local to the player.</p><p>In our game, there are three distinct types of Game Objects. First, there's the world itself — a collection of static meshes, mostly cubes. These meshes aren't represented inside the networked aspects of the game at all. Via a series of <i>colliders</i>, any other Game Objects that navigate on top of or around these meshes are stopped from falling through floors and moving through walls. This same sort of design is what you've seen in every 3D game over the last twenty years, including classics like Super Mario 64.</p><div></div><p><a href="https://giphy.com/gifs/glitch-cheat-mario-64-fTmOPDE3jH9qP9HVuL">via GIPHY</a></p><p>In Durable World, your player Game Object is a simple capsule. This shape is built into Unity, and by attaching a C# script we can have basic movement using keyboard controls (in my case, I used <a href="https://www.youtube.com/watch?v=4HpC--2iowE">this tutorial from Brackey</a>).</p><p>Multiplayer characters are represented as a simplified version of the same player capsule. Instead of attaching any sort of input logic (keyboard, mouse, etc.) to these Game Objects, the crucial aspects of their location in the 3D space — namely, position and rotation — are managed by the WebSocket client.</p><p>When the game starts, you're placed in a single-player environment: your character can move around the static 3D world. 
Once the game connects to Workers and receives a WebSocket, it can begin to act in a multiplayer context. Here's a wireframe look at the world before it starts up:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ODKLZQJIpgx1RpzAeEbvp/74db222f9536c968203270a7a1e4fc23/image4-11.png" />
            
            </figure><p>When it comes to the actual code for the project, the connection aspects are quite simple: a Connection singleton is created when the game starts, which uses a WebSocket class to connect to Workers and call a variety of functions on new WebSocket updates. You can find the complete code <a href="https://github.com/signalnerve/durable-world/blob/main/packages/unity/Assets/Connection.cs">here</a>, but I'll summarize the important parts below.</p><p>First, we need to <i>send</i> the position of your player back up to Workers. This happens in a loop, called every 0.2 seconds. The UpdatePosition function takes the player position and rotation, encodes them into JSON, and sends the data up to the WebSocket. Note that by sending the position every 0.2 seconds, we're effectively building a player that updates at five frames per second. Considering that most games run at <i>at least</i> 30 frames per second, if not higher, this will be a problem we'll solve later using interpolation.</p>
            <pre><code>using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

using NativeWebSocket;

public class Connection : MonoBehaviour
{
  WebSocket websocket;

  // Fields and helper type used below but elided from the original snippet;
  // the initial values here are placeholders, not from the original post.
  int retries = 0;
  int maxRetries = 5;
  public GameObject player;
  private Vector3 lastPosition;

  [System.Serializable]
  public class PlayerPosition
  {
    public string type;
    public string position;
    public string rotation;
  }

  // Start is called before the first frame update
  void Start()
  {
    Connect();
  }

  async void Connect()
  {
    retries += 1;

    if (maxRetries &lt; retries)
    {
      return;
    }

    websocket = new WebSocket("wss://durable-world.signalnerve.workers.dev/websocket");

    websocket.OnOpen += () =&gt;
    {
      Debug.Log("Connection open!");
    };

    websocket.OnError += (e) =&gt;
    {
      Debug.Log("Error! " + e);
      Connect();
    };

    websocket.OnClose += (e) =&gt;
    {
      Debug.Log("Connection closed!" + e);
      Connect();
    };

    websocket.OnMessage += (bytes) =&gt;
    {
      // Do things with new messages
    };

    // Keep sending messages at every 0.2 seconds
    InvokeRepeating("UpdatePosition", 0.0f, 0.2f);

    // waiting for messages
    await websocket.Connect();
  }

  void Update()
  {
#if !UNITY_WEBGL || UNITY_EDITOR
    websocket.DispatchMessageQueue();
#endif
  }

  async void UpdatePosition()
  {
    if (websocket.State == WebSocketState.Open)
    {
      var currentPos = player.transform.position;
      if (currentPos == lastPosition)
      {
        return;
      }

      PlayerPosition playerPosition = new PlayerPosition();
      playerPosition.position = $"{currentPos.x},{currentPos.y},{currentPos.z}";
      var currentRot = player.transform.rotation;
      playerPosition.rotation = $"{currentRot.eulerAngles.x},{currentRot.eulerAngles.y},{currentRot.eulerAngles.z}";
      playerPosition.type = "POSITION_UPDATED";
      await websocket.SendText(JsonUtility.ToJson(playerPosition));
      lastPosition = currentPos;
    }
  }

  private async void OnApplicationQuit()
  {
    await websocket.Close();
  }
}</code></pre>
            <p>Next, we need to listen for other players currently in the game. To handle this, we listen to incoming WebSocket messages <i>from</i> Workers. Each message will contain the entirety of our game state (something we could definitely optimize in the future), which we can parse and use to make decisions about how our local version of the game should update. For each user in our gameState payload, we can create a new instance of a player, and begin tracking it locally. We can also update position, rotation, and set a simple UI element indicating the player's name, inside of CreateClient:</p>
            <pre><code>using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

using NativeWebSocket;

public class Connection : MonoBehaviour
{
  async void Connect()
  {
    // Truncated code

    websocket.OnMessage += (bytes) =&gt;
    {
      var payload = System.Text.Encoding.UTF8.GetString(bytes);
      GameState gameState = JsonUtility.FromJson&lt;GameState&gt;(payload);

      foreach (var user in gameState.users)
      {
        try
        {
          if (user.id == gameState.id)
          {
            continue;
          }

          Client client;
          if (!Clients.TryGetValue(user.id, out client))
          {
            client = CreateClient(user);
          }

          var rt = user.rotation.Split(","[0]); // gets 3 parts of the vector into separate strings
          var rtx = float.Parse(rt[0]);
          var rty = float.Parse(rt[1]);
          var rtz = float.Parse(rt[2]);
          var newRot = Quaternion.Euler(rtx, rty, rtz);
          client.interpolateMovement.endRotation = newRot;

          var pt = user.position.Split(","[0]); // gets 3 parts of the vector into separate strings
          var ptx = float.Parse(pt[0]);
          var pty = float.Parse(pt[1]);
          var ptz = float.Parse(pt[2]);
          var newPos = new Vector3(ptx, pty, ptz);
          client.interpolateMovement.endPosition = newPos;
        }
        catch (Exception e)
        {
          Debug.Log(e);
        }
      }

      TMPro.TextMeshProUGUI text = onlineText.GetComponent&lt;TMPro.TextMeshProUGUI&gt;();
      text.text = $"Online: {gameState.users.Length + 1}\nPlaying as {gameState.name}";
    };

    // Keep sending messages at every 0.2 seconds
    InvokeRepeating("UpdatePosition", 0.0f, 0.2f);

    // waiting for messages
    await websocket.Connect();
  }

  Client CreateClient(User user)
  {
    var newClient = new Client();
    newClient.id = user.id;
    var otherPlayer = Instantiate(otherPlayerPrefab, new Vector3(0, 0, 0), Quaternion.identity);
    otherPlayer.name = user.id;

    TMPro.TextMeshPro text = otherPlayer.GetComponentInChildren&lt;TMPro.TextMeshPro&gt;();
    text.text = user.name;

    newClient.playerObject = otherPlayer;
    newClient.interpolateMovement = otherPlayer.GetComponent&lt;InterpolateMovement&gt;();
    Clients.Add(user.id, newClient);
    return newClient;
  }

  // Truncated code
}</code></pre>
            <p>With all of this code set up, we've established a simple system for sending our local player position to Workers. When my player position updates, everyone else in the game receives the position as part of the larger game state payload, and updates the local copy of each player accordingly.</p><p>I mentioned that these updates happen <i>every 0.2 seconds</i>. Games are expected to update <i>at least</i> thirty times a second, if not more: modern games are generally expected to run at 60 frames per second, and update extremely quickly.</p><p>It's because of that expectation that we need to <i>interpolate</i> movement for our players. Instead of sending player updates sixty times a second, which would be a huge load on our Durable Object, we can look at the incoming new position or rotation for an object, and use some math to smooth the movement from where a player <i>is</i> to where they are <i>going</i>. Unity (and many other game engines) provide this behavior via APIs like SmoothDamp — a function for smoothing rapid, jarring movement over time — as seen below in the InterpolateMovement script, which is used to manage player position and rotation:</p>
            <pre><code>using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class InterpolateMovement : MonoBehaviour
{
  public Vector3 endPosition;
  public Quaternion endRotation;

  public float rotationSmoothTime = 0.3f;
  public float positionSmoothTime = 0.6f;
  private Vector3 posVelocity = Vector3.zero;
  private float rotVelocity = 0.0f;

  void Update()
  {
    transform.position = Vector3.SmoothDamp(transform.position, endPosition, ref posVelocity, positionSmoothTime);

    float delta = Quaternion.Angle(transform.rotation, endRotation);
    if (delta &gt; 0f)
    {
      float t = Mathf.SmoothDampAngle(delta, 0.0f, ref rotVelocity, rotationSmoothTime);
      t = 1.0f - (t / delta);
      transform.rotation = Quaternion.Slerp(transform.rotation, endRotation, t);
    }
  }
}</code></pre>
            
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>The availability of tools like Durable Objects and WebSockets at the edge unlocks a new class of application that we can build with Cloudflare Workers. Games are just a single use case, and we’ve only begun exploring the possibilities for real-time, highly interactive games at the edge. If you're interested in checking out the source for Durable World, you can check it out <a href="https://github.com/signalnerve/durable-world">on GitHub</a>. <a href="https://discord.com/invite/cloudflaredev">Join us in our Cloudflare Workers Discord</a> if you want to chat Workers, Durable Objects, or anything else exploring new, interesting stuff we can build in a distributed serverless context.</p> ]]></content:encoded>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">4DPMTEfNaqfE6V3ocAVBrU</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing WebSockets Support in Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/introducing-websockets-in-workers/</link>
            <pubDate>Wed, 14 Apr 2021 13:02:00 GMT</pubDate>
            <description><![CDATA[ WebSockets are a powerful new addition to the Workers platform that unlocks real-time use cases in your application. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>UPDATE - JAN, 24, 2022: We've made optimizations that can reduce your bill when using WebSockets with Workers Unbound. For more information, see <a href="/workers-optimization-reduces-your-bill/">A Workers optimization that reduces your bill</a>.</p><p>Today, we're releasing support for WebSockets in Cloudflare Workers.</p><p>WebSockets unlock powerful use-cases in your serverless applications — live-updating, interactive experiences that bridge the gap between your users and Workers' powerful network and runtime APIs.</p><p>In this blog post, we’ll walk you through the basics of using WebSockets with Workers. That being said, WebSockets on their own aren’t immediately useful -- to power the interactivity, you need to coordinate a storage layer to go with them. With the addition of Durable Objects as a solution to building coordinated state on the edge, you can combine the power of interactive compute (WebSockets) with coordinated state (Durable Objects), and build incredible real-time applications. Durable Objects was <a href="/durable-objects-open-beta/">released in open beta at the end of last month</a>, and later in this blog post, we’ll explore how Durable Objects are well-suited towards building with WebSockets.</p>
    <div>
      <h2>Getting started with WebSockets in Workers</h2>
      <a href="#getting-started-with-websockets-in-workers">
        
      </a>
    </div>
    <p>WebSockets allow clients to open a connection back to a server (or in our case, Cloudflare Workers) that can receive and send messages. Instead of polling a server continuously for new information, a single WebSocket connection can constantly receive data and unlock the kind of live use-cases that we're used to in modern applications: things like live scores for sports events, real-time chat, and more.</p><p>Let's dig into how to use WebSockets in Workers so you can understand how easy they are to pick up and start using in your application.</p>
    <div>
      <h3>Instantiating WebSocketPairs in Cloudflare Workers</h3>
      <a href="#instantiating-websocketpairs-in-cloudflare-workers">
        
      </a>
    </div>
    <p>Workers respond to HTTP requests sent from clients. A Request comes in, and a Response comes back. You can begin using WebSockets in your Workers code by looking for an Upgrade header in any incoming requests: this is an indication from the client that it's looking to set up a new WebSocket:</p>
            <pre><code>addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const upgradeHeader = request.headers.get("Upgrade")
  if (upgradeHeader !== "websocket") {
    return new Response("Expected websocket", { status: 400 })
  }

  // Set up WebSocket
}</code></pre>
            <p>If the Upgrade header is present, we set up a new instance of WebSocketPair — a set of two WebSockets, one for the client, and one for the server. In our code, we'll use the server WebSocket to set up our server-side logic, and return the client socket back to the client. Note the usage of the 101 status code (<a href="https://httpstatuses.com/101">"Switching Protocols"</a>) and the new webSocket property on Response:</p>
            <pre><code>addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const upgradeHeader = request.headers.get("Upgrade")
  if (upgradeHeader !== "websocket") {
    return new Response("Expected websocket", { status: 400 })
  }

  const [client, server] = Object.values(new WebSocketPair())
  await handleSession(server)

  return new Response(null, {
    status: 101,
    webSocket: client
  })
}</code></pre>
            <p>In our handleSession function, we call the accept function on our WebSocket. This tells the Workers runtime that it will be responsible for this server WebSocket from the WebSocketPair. We can then handle events on the WebSocket, by using addEventListener and providing callback functions, particularly for message events (when new data comes in), and close events, when the WebSocket connection closes:</p>
            <pre><code>async function handleSession(websocket) {
  websocket.accept()
  websocket.addEventListener("message", async message =&gt; {
    console.log(message)
  })

  websocket.addEventListener("close", async evt =&gt; {
    // Handle when a client closes the WebSocket connection
    console.log(evt)
  })
}</code></pre>
            
    <div>
      <h3>Connecting to WebSockets in a Browser Client</h3>
      <a href="#connecting-to-websockets-in-a-browser-client">
        
      </a>
    </div>
    <p>With WebSockets support added in the Workers function, we can now connect to it from a client. Workers' WebSocket support works directly with the browser-default WebSocket class, meaning that you can connect to it directly in vanilla JavaScript without any additional libraries.</p><p>In fact, this connection process is so simple that it almost explains itself by just looking at the code. Just pass the WebSocket URL to a new instance of WebSocket, and then watch for new events on the WebSocket itself, like open (when the WebSocket connection opens), message (when new data comes in), and close (when the WebSocket connection closes):</p>
            <pre><code>let websocket = new WebSocket(url)
if (!websocket) {
  throw new Error("Server didn't accept WebSocket")
}

websocket.addEventListener("open", () =&gt; {
  console.log('Opened websocket')
})

websocket.addEventListener("message", message =&gt; {
  console.log(message)
})

websocket.addEventListener("close", message =&gt; {
  console.log('Closed websocket')
})

websocket.addEventListener("error", message =&gt; {
  console.log('Something went wrong with the WebSocket')

  // Potentially reconnect the WebSocket connection, by instantiating a
  // new WebSocket as seen above, and connecting new events
  // websocket = new WebSocket(url)
  // websocket.addEventListener(...)
})

// Close WebSocket connection at a later point
const closeConnection = () =&gt; websocket.close()</code></pre>
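<p>One caveat about the reconnect idea in the error handler above: reconnecting immediately in a loop can hammer a server that is already struggling. A common refinement is exponential backoff with a cap — the delay doubles with each failed attempt up to a maximum. The base and cap values below are illustrative, not from this post:</p>

```javascript
// Backoff delay for WebSocket reconnects: doubles per attempt, capped.
// baseMs and maxMs are illustrative defaults, not prescribed values.
function backoffDelay(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Sketch of a reconnecting wrapper around the browser WebSocket class.
function connectWithRetry(url, attempt = 0) {
  const ws = new WebSocket(url);
  // Reset the attempt counter once a connection succeeds
  ws.addEventListener("open", () => { attempt = 0; });
  // Schedule the next attempt with an increasing delay on close
  ws.addEventListener("close", () => {
    setTimeout(() => connectWithRetry(url, attempt + 1), backoffDelay(attempt));
  });
  return ws;
}
```

<p>With 500ms as the base, attempts wait 500ms, 1s, 2s, 4s, and so on, flattening out at 30 seconds.</p>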
            
    <div>
      <h2>Durable Objects and WebSockets</h2>
      <a href="#durable-objects-and-websockets">
        
      </a>
    </div>
    <p>WebSockets are a powerful addition to the Workers toolkit, but you'll notice that in the above example, your WebSocket connections are effectively stateless. I can click the "Click me" button a hundred times, send data back and forth in my WebSocket connection, but as soon as I refresh the page, all of that information is gone. How do we provide state for our WebSocket and for our application in general?</p><p>Our solution for this is Durable Objects. Instead of using external state solutions and making requests to origin servers for a database or API provider, Durable Objects provide simple APIs for accessing and updating stateful data directly at the edge, right alongside your serverless functions.</p><p>Durable Objects complements WebSockets perfectly, providing the stateful primitives needed to make WebSockets at the edge uniquely powerful and performant. When we initially announced the Durable Objects private beta, we also previewed WebSockets in Workers for the first time, as part of our live chat demo. That demo, <a href="https://edge-chat-demo.cloudflareworkers.com/">available here</a>, still serves as a great example of a more complex application that can be built entirely on Workers, Durable Objects, and WebSockets.</p>
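<p>The property that makes this pairing work is deterministic routing: a Worker asks for a Durable Object by name via <code>idFromName</code>, and every request using the same name reaches the same object instance, so all WebSocket clients for, say, one chat room share one coordinator. In a real Worker this looks like <code>env.ROOMS.idFromName("lobby")</code> followed by <code>env.ROOMS.get(id).fetch(request)</code> (where <code>ROOMS</code> is a hypothetical binding name). The plain-JS sketch below only simulates that one-name-one-instance property — the <code>Room</code> class and registry are illustrative, not the Workers API:</p>

```javascript
// Simulation of the Durable Objects routing property: one name, one instance.
// The real API is env.BINDING.idFromName(name) + env.BINDING.get(id);
// this Map-based registry is purely illustrative.
class Room {
  constructor(name) {
    this.name = name;
    this.sessions = []; // connected WebSockets would live here
  }
  join(client) {
    this.sessions.push(client);
    return this.sessions.length;
  }
}

const registry = new Map();
function getRoom(name) {
  // Deterministic: the same name always yields the same instance, which is
  // what lets a single object hold shared state for every connected client.
  if (!registry.has(name)) registry.set(name, new Room(name));
  return registry.get(name);
}

const a = getRoom("lobby").join("alice");
const b = getRoom("lobby").join("bob");
console.log(a, b); // 1 2 — both clients reached the same Room instance
```

<p>Because every client for a given name lands on the same instance, the object's in-memory session list is a complete view of the room — no external database is needed to know who is connected.</p>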
    <div>
      <h2>Additional resources</h2>
      <a href="#additional-resources">
        
      </a>
    </div>
    <p>With the release of WebSocket support in Workers, we're providing two resources to help you get started.</p><p>First, a new websocket-template that will help you get started with WebSockets on Workers. The template includes a simple HTML page that shows you how to connect to a Workers-based WebSocket, send and receive messages, as well as how to close the connection and handle any unknown messages or data. This template is the logical extension of the code that I shared above, and could be extended for most use-cases with WebSockets in your application.</p><p><a href="https://websocket-template.cloudflare-docs.workers.dev/">You can see a demo version of the project here.</a></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2sdk1ix59XmdNLQcgcqR7W/0853ac10e9e44d7725107db975501057/image1-17.png" />
            
            </figure><p><a href="https://developers.cloudflare.com/workers/learning/using-websockets">We've also released documentation for WebSocket usage in Workers.</a> This includes a new Learning page on working with WebSockets, as well as a collection of reference documentation for <a href="https://developers.cloudflare.com/workers/runtime-apis/websockets">understanding the new WebSocketPair API</a>, and changes to the Response class to allow WebSocket upgrade responses, as seen in the code above.</p>
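<p>As a quick recap, the server side of the connection can be sketched in a Worker using the WebSocketPair API described in those docs. This is a minimal sketch: the echo handler is a placeholder for real application logic.</p>

```javascript
// Sketch of the server side in a Worker: accept the upgrade with the
// WebSocketPair API and echo messages back (the echo is illustrative).
const upgradeHeaderIsWebSocket = headers =>
  (headers.get('Upgrade') || '').toLowerCase() === 'websocket'

async function handleRequest(request) {
  if (!upgradeHeaderIsWebSocket(request.headers)) {
    return new Response('Expected a WebSocket upgrade request', { status: 426 })
  }

  // WebSocketPair creates two linked sockets: hand one back to the
  // client, accept and use the other on the server
  const [client, server] = Object.values(new WebSocketPair())
  server.accept()

  server.addEventListener('message', event => {
    server.send(`Echo: ${event.data}`)
  })

  // A 101 response with the `webSocket` field completes the upgrade
  return new Response(null, { status: 101, webSocket: client })
}
```

The client code shown earlier connects to this handler unchanged; closing either socket fires the corresponding `close` event on the other side.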
    <div>
      <h2>Pricing considerations</h2>
      <a href="#pricing-considerations">
        
      </a>
    </div>
    <p>Today, WebSockets incur a request charge when the connection is first established, followed by the underlying Worker’s duration charge as long as the WebSocket is active. There is no per-message fee for WebSockets.</p><p>This means that if you create a Worker on the Workers Unbound plan and then pass-through a WebSocket connection to another server or to a Durable Object, you’ll be billed wall-clock duration for the entire time the connection is open. This may be surprising, since the Worker itself is not participating in the request.</p><p>This is currently due to a limitation in the platform, where the Worker passing the WebSocket through remains active for the duration of the WebSocket connection. We’re working to make it so that you will not be charged duration time for proxying a WebSocket in the near future.</p><p>Until we make this fix, if you want to use WebSockets with Durable Objects, we recommend using Workers Bundled rather than Unbound to pass the WebSocket connection to the Durable Object to avoid surprise charges. You should not use Workers Unbound on a Worker that passes on a WebSocket connection to Durable Objects.</p><p>Today, while in beta, Durable Objects are not being billed, so there is no cost for terminating the WebSocket connection in a Durable Object.</p><p>We’re currently working on the best way to price WebSockets that are terminated in a Durable Object.</p><p>Our current thinking is that when using WebSockets, you'll be charged for wall clock time whenever a message is being processed by the Durable Object on any WebSocket connection - this charge would be shared across all WebSockets connected to a given Durable Object. When there is no CPU activity on a Durable Object, but any number of WebSocket connections remain established, you'll be billed a much lower active connection charge, per second.</p><p>We want your feedback on how this pricing would affect your usage of Workers! 
Send the <a href="#">Workers Product</a> team your thoughts on how we could improve your WebSockets experience, particularly on pricing.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[JAMstack]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">3J6zqhA5gl24bC864TCSIy</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building Black Friday e-commerce experiences with JAMstack and Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/building-black-friday-e-commerce-experiences-with-jamstack-and-cloudflare-workers/</link>
            <pubDate>Thu, 26 Nov 2020 18:03:23 GMT</pubDate>
            <description><![CDATA[ Cloudflare Workers continues to excel as a JAMstack deployment platform, and be used to power e-commerce experiences, integrating with familiar tools like Stripe, Nuxt.js, and Sanity.io. ]]></description>
<content:encoded><![CDATA[ <p>The idea of serverless is to allow developers to focus on writing code rather than operations — the hardest of which is scaling applications. One of the largest and most predictable surges of traffic that flows through Cloudflare's network every year comes on Black Friday. As John wrote at the end of last year, <a href="/this-holidays-biggest-online-shopping-day-was-black-friday/">Black Friday is the Internet's biggest online shopping day</a>. In a past <a href="https://www.cloudflare.com/case-studies/cordial-workers-black-friday">case study</a>, we talked about how Cordial, a marketing automation platform, used Cloudflare Workers to <a href="https://www.cloudflare.com/solutions/ecommerce/optimization/">reduce their API server latency</a> and handle the busiest shopping day of the year without breaking a sweat.</p><p>The ability to handle immense scale is well-trodden territory for us on the Cloudflare blog, but scale is not always the first thing developers think about when building an application — developer experience is likely to come first. And developer experience is something Workers does just as well; through Wrangler and APIs like Workers KV, Workers is an awesome place to hack on new projects.</p><p>Over the past few weeks, I've been working on a sample <a href="https://github.com/signalnerve/ecommerce-bundles-workers-example">open-source</a> e-commerce app for selling software, educational products, and bundles. Inspired by Humble Bundle, it's built entirely on Workers, and it integrates powerfully with all kinds of first-class modern tooling: <a href="https://stripe.com/">Stripe</a>, an API for accepting payments (both <i>from</i> customers and <i>to</i> authors, as we’ll see later), and <a href="https://www.sanity.io/">Sanity.io</a>, a headless CMS for data management.</p><p>This kind of project is perfectly suited for Workers. 
We can lean into Workers as a <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">static site hosting platform</a> (via <a href="https://workers.cloudflare.com/sites">Workers Sites</a>), API server, and webhook consumer, all within a single codebase, and deployed instantly around the world on Cloudflare's network.</p><p>If you want to see a deployed version of this template, check out <a href="https://ecommerce-example.signalnerve.workers.dev/">ecommerce-example.signalnerve.workers.dev</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/74ASlUQIruTiQ3vb7IofzP/d92115dd93a237a2a383c7ec4df6a31e/image2-14.png" />
            
            </figure><p><i>The frontend of the e-commerce Workers template.</i></p><p>In this blog post, I'll dive deeper into the implementation details of the site, covering how Workers <a href="/jamstack-at-the-edge-how-we-built-built-with-workers-on-workers/">continues to excel as a JAMstack deployment platform</a>. I’ll also cover some new territory in integrating Workers with Stripe. The project is <a href="https://github.com/cloudflare/ecommerce-bundles-workers-example">open-source on GitHub</a>, and I'm actively working on improving the documentation, so that you can take the codebase and build on it for your own <a href="https://www.cloudflare.com/ecommerce/">e-commerce sites</a> and use cases.</p>
    <div>
      <h3>The frontend</h3>
      <a href="#the-frontend">
        
      </a>
    </div>
    <p>As I wrote last year, Workers continues to be an <a href="/jamstack-at-the-edge-how-we-built-built-with-workers-on-workers/">amazing platform for JAMstack apps</a>. When I started building this template, I wanted to use some things I already knew — Sanity.io for managing data, and of course, Workers Sites for deploying — but some new tools as well.</p><p>Workers Sites is incredibly simple to use: just point it at a directory of static assets, and you're good to go. With this project, I decided to try out <a href="https://nuxtjs.org/">Nuxt.js</a>, a Vue-based static site generator, to power the frontend for the application.</p><p>Using Sanity.io, the data representing the bundles (and the products inside of those bundles) is stored on Sanity.io's own CDN, and retrieved client-side by the Nuxt.js application.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/hCZNF9X1W2MNRgsSEJMbl/524eaed1ef2c971d9d5486227a9fc74b/image3-18.png" />
            
            </figure><p><i>Managing data inside Sanity.io’s headless CMS interface.</i></p><p>When a potential customer visits a bundle, they'll see a list of products from Sanity.io, and a checkout button provided by Stripe.</p>
    <div>
      <h3>Responding to new checkout sessions and purchases</h3>
      <a href="#responding-to-new-checkout-sessions-and-purchases">
        
      </a>
    </div>
    <p>Making API requests with Stripe's Node SDK isn't currently supported in Workers (check out the GitHub issue where we're <a href="https://github.com/stripe/stripe-node/issues/997">discussing a fix</a>), but because it's just REST underneath, we can easily make REST requests using the library.</p><p>When a user clicks the checkout button on a bundle page, it makes a request to the Cloudflare Workers API, and securely generates a new session for the user to checkout with Stripe.</p>
            <pre><code>import { json, stripe } from '../helpers'

export default async (request) =&gt; {
  const body = await request.json()
  const { price_id } = body

  const session = await stripe('/checkout/sessions', {
    payment_method_types: ['card'],
    line_items: [{
        price: price_id,
        quantity: 1,
      }],
    mode: 'payment'
  }, 'POST')

  return json({ session_id: session.id })
}</code></pre>
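<p>The <code>stripe</code> helper imported above isn't shown in the post; here is one plausible sketch of it, assuming a <code>STRIPE_SECRET_KEY</code> secret bound to the Worker. The only fiddly part is that Stripe's REST API expects form-encoded bodies, with nested fields flattened into bracket notation.</p>

```javascript
// Hypothetical version of the `stripe` helper: a thin fetch wrapper around
// Stripe's REST API. STRIPE_SECRET_KEY is assumed to be a Workers secret.
const STRIPE_API_BASE = 'https://api.stripe.com/v1'

// Flatten nested objects/arrays into Stripe's bracketed form fields,
// e.g. { line_items: [{ quantity: 1 }] } becomes line_items[0][quantity]=1
function encodeForm(obj, prefix = '') {
  return Object.entries(obj).flatMap(([key, value]) => {
    const name = prefix ? `${prefix}[${key}]` : key
    return typeof value === 'object' && value !== null
      ? encodeForm(value, name)
      : [`${encodeURIComponent(name)}=${encodeURIComponent(value)}`]
  })
}

const encodeBody = obj => encodeForm(obj).join('&')

async function stripe(endpoint, body, method = 'GET') {
  const response = await fetch(`${STRIPE_API_BASE}${endpoint}`, {
    method,
    headers: {
      Authorization: `Bearer ${STRIPE_SECRET_KEY}`,
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: method === 'POST' ? encodeBody(body) : undefined,
  })
  return response.json()
}
```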
            <p>This is where Workers excels as a <a href="https://www.cloudflare.com/the-net/jamstack-websites/">JAMstack platform</a>. Yes, it can do static site hosting, but with just a few extra lines of routing code, I can deploy a highly scalable API <b>right alongside</b> my Nuxt.js application.</p>
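<p>That routing code can be as small as a single branch. A sketch, where <code>handleApiRequest</code> and <code>serveStaticAsset</code> are illustrative stand-ins (in the real project, static assets are served through Workers Sites):</p>

```javascript
// Route /api/* to serverless functions; everything else to static assets
const isApiRequest = url => new URL(url).pathname.startsWith('/api/')

async function route(event) {
  if (isApiRequest(event.request.url)) {
    // Dispatch to an API handler, e.g. the checkout session endpoint above
    return handleApiRequest(event.request)
  }
  // Fall back to the static Nuxt.js build output
  return serveStaticAsset(event)
}
```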
    <div>
      <h3>Webhooks and working with external services</h3>
      <a href="#webhooks-and-working-with-external-services">
        
      </a>
    </div>
    <p>This idea extends throughout the rest of the checkout process. When a customer is successfully charged for their purchase, Stripe sends a webhook back to Cloudflare Workers. In order to complete the transaction on our end, the Workers application:</p><ul><li><p><b>Validates the incoming data from Stripe to ensure that it’s legitimate</b>. This means that every incoming webhook request is explicitly validated using your Stripe account details, and can be confirmed to be valid before the function acts on it.</p></li><li><p><b>Distributes payments to the authors using Stripe Connect</b>. When a customer buys a bundle for $20, that $20 (minus Stripe fees) gets distributed evenly between the authors in that bundle — all of this calculation and the associated transfer requests happen inside the Worker.</p></li><li><p><b>Sends a unique download link to the customer</b>. Using Workers KV, a unique token is set up that corresponds to the customer's email, which can be used to retrieve the content the customer purchased. This integration uses Mailgun to construct an email and send it entirely over REST APIs.</p></li></ul><p>By the time the purchase is complete, the Workers serverless API will have interfaced with four distinct APIs, persisting records, sending emails, and handling and distributing payments to everyone involved in the e-commerce transaction. With Workers, this all happens in a single codebase, with low latency and a superb developer experience. The entire API is type-checked and validated before it ever gets shipped to production, thanks to our <a href="https://github.com/cloudflare/worker-typescript-template">TypeScript template</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/28P0T9V4Vr2rUjFGafTKbg/8f3cca1ca86cd49a0e4360cbfb52af79/image1-19.png" />
            
            </figure><p>Each of these tasks involves a pretty serious level of complexity, but by using Workers, we can abstract each of them into smaller pieces of functionality, and compose powerful, on-demand, and infinitely scalable webhooks directly on the serverless edge.</p>
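<p>To illustrate the validation step above: Stripe signs each webhook by sending a <code>Stripe-Signature</code> header of the form <code>t=&lt;timestamp&gt;,v1=&lt;hex HMAC-SHA256&gt;</code>, computed over <code>`${timestamp}.${payload}`</code> with your signing secret. A sketch of checking it with the Web Crypto API available in Workers (a production version would also enforce a timestamp tolerance and use a constant-time comparison):</p>

```javascript
// Parse "t=<timestamp>,v1=<hex signature>" into its parts
function parseStripeSignature(header) {
  const fields = Object.fromEntries(
    header.split(',').map(pair => {
      const idx = pair.indexOf('=')
      return [pair.slice(0, idx), pair.slice(idx + 1)]
    })
  )
  return { timestamp: fields.t, signature: fields.v1 }
}

// Recompute the HMAC-SHA256 of `${timestamp}.${payload}` and compare.
// `secret` would come from a Workers secret binding in practice.
async function isValidWebhook(payload, header, secret) {
  const { timestamp, signature } = parseStripeSignature(header)
  const key = await crypto.subtle.importKey(
    'raw', new TextEncoder().encode(secret),
    { name: 'HMAC', hash: 'SHA-256' }, false, ['sign']
  )
  const mac = await crypto.subtle.sign(
    'HMAC', key, new TextEncoder().encode(`${timestamp}.${payload}`)
  )
  const expected = [...new Uint8Array(mac)]
    .map(b => b.toString(16).padStart(2, '0')).join('')
  return expected === signature
}
```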
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>I'm really excited about the launch of this template and, of course, it wouldn't have been possible to ship something like this in just a few weeks without using Cloudflare Workers. If you're interested in digging into how any of the above stuff works, <a href="https://github.com/cloudflare/ecommerce-bundles-workers-example">check out the project on GitHub</a>!</p><p>With the recent announcement of our <a href="/workers-kv-free-tier/">Workers KV free tier</a>, this project is perfect to fork and build your own e-commerce products with. Let me know what you build and <a href="https://twitter.com/signalnerve">say hi on Twitter</a>!</p> ]]></content:encoded>
            <category><![CDATA[eCommerce]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[JAMstack]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">1Bs9ynEmg6BZJaONeoaoxx</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Lessons from a 2020 intern assignment]]></title>
            <link>https://blog.cloudflare.com/lessons-from-the-2020-intern-assignment/</link>
            <pubDate>Tue, 23 Jun 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ In this blog post, I want to explain the details of the full-stack take home exercise that we sent out to our 2020 internship applicants. ]]></description>
            <content:encoded><![CDATA[ <p>This summer, Cloudflare announced that we were <a href="/cloudflare-doubling-size-of-2020-summer-intern-class/">doubling the size of our Summer 2020 intern class</a>. Like everyone else at Cloudflare, our interns would be working remotely, and due to COVID-19, many companies had significantly reduced their intern class size, or outright cancelled their programs entirely.</p><p>With our announcement came a <i>huge</i> influx of  students interested in coming to Cloudflare. For applicants seeking engineering internships, we opted to create an exercise based on our serverless product <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>. I'm not a huge fan of timed coding exercises, which is a pretty traditional way that companies gauge candidate skill, so when I was asked to help contribute an example project that would be used instead, I was excited to jump on the project. In addition, it was a rare chance to have literally thousands of eager pairs of eyes on Workers, and on <a href="https://developers.cloudflare.com/workers/">our documentation</a>, a project that I've been working on daily since I started at Cloudflare over a year ago.</p><p>In this blog post, I will explain the details of the full-stack take home exercise that we sent out to our 2020 internship applicants. We asked participants to spend no more than an afternoon working on it, and because it was a take home project, developers were able to look at documentation, copy-paste code, and generally solve it however they would like. I'll show <i>how</i> to solve the project, as well as some common mistakes and some of the implementations that came from reviewing submissions. If you're interested in checking out the exercise, or want to attempt it yourself, <a href="https://github.com/cloudflare-internship-2020/internship-application-fullstack">the code is open-source on GitHub</a>. 
Note that applications for our internship program this year are closed, but it's still a fun exercise, and if you're interested in Cloudflare Workers, you should give it a shot!</p>
    <div>
      <h3>What the project was: A/B Test Application</h3>
      <a href="#what-the-project-was-a-b-test-application">
        
      </a>
    </div>
    <p>Workers as a serverless platform excels at many different use-cases. For example, using the Workers runtime APIs, developers can directly generate responses and return them to the client: this is usually called an <i>originless</i> application. You can also make requests to an existing origin and enhance or alter the request or response in some way; this is known as an <i>edge</i> application.</p><p>In this exercise, we opted to have our applicants build an A/B test application, where the Workers code should make a request to an API, and return the response of one of two URLs. Because the application doesn’t make requests to an origin, but serves a response (potentially with some modifications) from an API, it can be thought of as an originless application – everything is served from Workers.</p>
            <pre><code>Client &lt;-----&gt; Workers application &lt;-------&gt; API
                                   |-------&gt; Route A
                                   |-------&gt; Route B</code></pre>
            <p>A/B testing is just one of many potential things you can do with Workers. By picking something seemingly “simple”, we can hone in on how each applicant used the Workers APIs – making requests, parsing and modifying responses – as well as deploying their app using our command-line tool <a href="https://github.com/cloudflare/wrangler">wrangler</a>. In addition, because Workers can do all these things directly on the edge, it meant that we could provide a <i>self-contained</i> exercise. It felt unfair to ask applicants to spin up their own servers, or host files in some service. As I learned during this process, Cloudflare Workers projects can be a great way to gauge experience in take home projects, without the usual deployment headaches!</p><p>To provide a foundation for the project, I created my own Workers application with three routes - first, an API route that returns an array with two URLs, and two HTML pages, each slightly different from the other (referred to as "variants").</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3ZPcrvN6DSVsRZSo5up8xn/1e050c43451434f11eded44b578ff327/image1-8.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5tvja2pCb0mBy0p40AbhtK/09562165754abcd9af8946b4c635791c/image2-7.png" />
            
            </figure><p>With the API in place, the exercise could be completed with the following steps:</p><ol><li><p>Make a fetch request to the API URL (provided in the instructions)</p></li><li><p>Parse the response from the API and transform it into JSON</p></li><li><p>Randomly pick one of the two URLs from the array <code>variants</code> inside of the JSON response</p></li><li><p>Make a request to that URL, and return the response back from the Workers application to the client</p></li></ol><p>The exercise was designed specifically to be a little past beginner JavaScript. If you know JavaScript and have worked on web applications, a lot of this stuff, such as making fetch requests, getting JSON responses, and randomly picking values in an array, should be things you're familiar with, or have at least seen before. Again, remember that this exercise was a take-home test: applicants could look up code, read <a href="https://developers.cloudflare.com/workers/">the Workers documentation</a>, and find the solution to the problem in whatever way they could. However, because there was an external API, and the variant URLs weren't explicitly mentioned in the prompt for the exercise, you still would need to correctly implement the fetch request and API response parsing in order to give a correct solution to the project.</p><p>Here's one solution:</p>
            <pre><code>addEventListener('fetch', (event) =&gt; {
  event.respondWith(handleRequest(event.request))
})


// URL returning a JSON response with variant URLs, in the format
//   { variants: [url1, url2] }
const apiUrl = `https://cfw-takehome.developers.workers.dev/api/variants`


const random = array =&gt; array[Math.floor(Math.random() * array.length)]


async function handleRequest(request) {
  const apiResp = await fetch(apiUrl)
  const { variants } = await apiResp.json()
  const url = random(variants)
  return fetch(url)
}</code></pre>
            <p>When an applicant completed the exercise, they needed to use wrangler to deploy their project to a registered <a href="/announcing-workers-dev/">Workers.dev subdomain</a>. This falls under the free tier of Workers, so it was a great way to get people exploring <a href="https://github.com/cloudflare/wrangler">wrangler</a>, our documentation, and the deploy process. We saw a number of GitHub issues filed on our docs and in the wrangler repo from people attempting to install wrangler and deploy their code, so it was great feedback on a number of things across the Workers ecosystem!</p>
    <div>
      <h3>Extra credit: using the Workers APIs</h3>
      <a href="#extra-credit-using-the-workers-apis">
        
      </a>
    </div>
    <p>In addition to the main portion of the exercise, I added a few extra credit sections to the project. These were explicitly not required to submit the project (though the existence <i>of</i> the extra credit had an impact on submissions: see the next section of the blog post), but if you were able to quickly finish the initial part of the exercise, you could dive deeper into some more advanced topics (and advanced Workers runtime APIs) to build a more interesting submission.</p>
    <div>
      <h3>Changing contents on the page</h3>
      <a href="#changing-contents-on-the-page">
        
      </a>
    </div>
    <p>With the variant responses being returned to the client, the first extra credit portion asked developers to replace the content on the page using Workers APIs. This could be done in two ways: simple text replacement, or using the <a href="https://developers.cloudflare.com/workers/reference/apis/html-rewriter/">HTMLRewriter API</a> built into the Workers runtime.</p><p>JavaScript has a string <code>.replace</code> function like most programming languages, and for simple substitutions, you could use it inside of the Worker to replace pieces of text inside of the response body:</p>
            <pre><code>// Rewrite using simple text replacement - this example modifies the CTA button
async function handleRequestWithTextReplacement(request) {
  const apiResponse = await fetch(apiUrl)
  const { variants } = await apiResponse.json()
  const url = random(variants)
  const response = await fetch(url)


  // Get the response as a text string
  const text = await response.text()


  // Replace the Cloudflare URL string and CTA text
  const replacedCtaText = text
    .replace('https://cloudflare.com', 'https://workers.cloudflare.com')
    .replace('Return to cloudflare.com', 'Return to Cloudflare Workers')
  return new Response(replacedCtaText, response)
}</code></pre>
            <p>If you’ve used string replacement at scale, on larger applications, you know that it can be fragile. The strings have to match <i>exactly</i>, and on a more technical level, reading <code>response.text()</code> into a variable means that Workers has to hold the entire response in memory. This problem is common when writing Workers applications, so in this exercise, we wanted to push people towards trying our runtime solution to this problem: the HTMLRewriter API.</p><p>The HTMLRewriter API provides a streaming selector-based interface for modifying a response as it passes through a Workers application. In addition, the API also allows developers to compose handlers to modify parts of the response using JavaScript classes or functions, so it can be a good way to test how people write JavaScript and their understanding of APIs. In the below example, we set up a new instance of the HTMLRewriter, and rewrite the <code>title</code> tag, as well as three pieces of content on the site: <code>h1#title</code>, <code>p#description</code>, and <code>a#url</code>:</p>
            <pre><code>// Rewrite text/URLs on screen with HTML Rewriter
async function handleRequestWithRewrite(request) {
  const apiResponse = await fetch(apiUrl)
  const { variants } = await apiResponse.json()
  const url = random(variants)
  const response = await fetch(url)


  // A collection of handlers for rewriting text and attributes
  // using the HTMLRewriter
  //
  // https://developers.cloudflare.com/workers/reference/apis/html-rewriter/#handlers
  const titleRewriter = {
    element: (element) =&gt; {
      element.setInnerContent('My Cool Application')
    },
  }
  const headerRewriter = {
    element: (element) =&gt; {
      element.setInnerContent('My Cool Application')
    },
  }
  const descriptionRewriter = {
    element: (element) =&gt; {
      element.setInnerContent(
        'This is the replaced description of my cool project, using HTMLRewriter',
      )
    },
  }
  const urlRewriter = {
    element: (element) =&gt; {
      element.setAttribute('href', 'https://workers.cloudflare.com')
      element.setInnerContent('Return to Cloudflare Workers')
    },
  }

  // Create a new HTMLRewriter and attach handlers for title, h1#title,
  // p#description, and a#url.
  const rewriter = new HTMLRewriter()
    .on('title', titleRewriter)
    .on('h1#title', headerRewriter)
    .on('p#description', descriptionRewriter)
    .on('a#url', urlRewriter)


  // Pass the variant response through the HTMLRewriter while sending it
  // back to the client.
  return rewriter.transform(response)
}</code></pre>
            
    <div>
      <h3>Persisting variants</h3>
      <a href="#persisting-variants">
        
      </a>
    </div>
    <p>A traditional A/B test application isn't as simple as randomly sending users to different URLs: for it to work correctly, it should also <i>persist</i> a chosen URL per-user. This means that when User A is sent to variant A, they should continue to see Variant A in subsequent visits. In this portion of the extra credit, applicants were encouraged to use Workers' close integration with the <code>Request</code> and <code>Response</code> classes to persist a cookie for the user, which can be parsed in subsequent requests to indicate a specific variant to be returned.</p><p>This exercise is dear to my heart, because surprisingly, I had no idea how to implement cookies before this year! I hadn't worked with request/response behavior as closely as I do with the Workers API in my past programming experience, so it seemed like a good challenge to encourage developers to check out our documentation, and wrap their head around how a crucial part of the web works! Below is an example implementation for persisting a variant using cookies:</p>
<pre><code>// Persist sessions with a cookie
async function handleRequestWithPersistence(request) {
  let url, response
  const cookieHeader = request.headers.get('Cookie')

  // If a Variant field is already set on the cookie...
  if (cookieHeader &amp;&amp; cookieHeader.includes('Variant')) {
    // Parse the URL from it (stopping at any following cookie)
    url = cookieHeader.match(/Variant=([^;]+)/)[1]
    // and return it to the client
    return fetch(url)
  } else {
    const apiResponse = await fetch(apiUrl)
    const { variants } = await apiResponse.json()
    url = random(variants)
    response = await fetch(url)

    // If the cookie isn't set, copy the original response (responses
    // returned from fetch are immutable) and append a Set-Cookie header,
    // setting the value `Variant` to the randomly selected variant URL.
    response = new Response(response.body, response)
    response.headers.append('Set-Cookie', `Variant=${url}`)
    return response
  }
}</code></pre>
            
    <div>
      <h3>Deploying to a domain</h3>
      <a href="#deploying-to-a-domain">
        
      </a>
    </div>
    <p>Workers makes a great platform for these take home-style projects because the existence of <a href="https://workers.dev/">workers.dev</a> and the ability to claim your workers.dev subdomain means you can deploy your Workers application without needing to own any domains. That being said, wrangler and Workers do have the ability to deploy to a domain, so for another piece of extra credit, applicants were encouraged to deploy their project to a domain that they owned! We were careful here to tell people <i>not</i> to buy a domain for this project: that's a potential financial burden that we don't want to put on anyone (especially interns), but for many web developers, they may already have test domains or even subdomains they could deploy their project to.</p><p>This extra credit section is particularly useful because it also gives developers a chance to dig into other Cloudflare features outside of Workers. Because deploying your Workers application to a domain requires that it be set up as a zone in the Cloudflare Dashboard, it's a great opportunity for interns to familiarize themselves with our onboarding process as they go through the exercise.</p><p>You can see an example Workers application deploy to a domain, as indicated by the <code>wrangler.toml</code> configuration file used to deploy the project:</p>
            <pre><code>name = "my-fullstack-example"
type = "webpack"
account_id = "0a1f7e807cfb0a78bec5123ff1d3"
zone_id = "9f7e1af6b59f99f2fa4478a159a4"</code></pre>
            
    <div>
      <h3>Where people went wrong</h3>
      <a href="#where-people-went-wrong">
        
      </a>
    </div>
    <p>By far the place where applicants struggled the most was in writing <i>clean code</i>. While we didn't evaluate submissions against a style guide, most people would have benefitted greatly from running their code through a "code prettifier": this could have been as simple as opening the file in VS Code (or something similar) and using the "Format Document" option. Inconsistent indentation and similar "readability" problems made some submissions, even when they were technically correct, very hard to read!</p><p>In addition, many applicants dove directly into the extra credit without making sure that the base implementation was working correctly. Opening the API URL in-browser, copying one of the two variant URLs, and hard-coding it into the application isn't a valid solution to the exercise, and building the HTMLRewriter/content-rewriting aspect of the exercise on top of that implementation makes it a pretty clear case of rushing! As I reviewed submissions, I found that this happened <i>a ton</i>, and it was a bummer to mark people down for incorrect implementations when it was clear that they were eager to approach some of the more complex aspects of the exercise.</p><p>On the topic of incorrect implementations, the most common mistake was misunderstanding the core requirement of the exercise: randomly choosing between the two variant URLs. Hard-coding URLs, as I mentioned above, was a common version of this, but I also saw people copying the entire JSON array, misunderstanding how to randomly pick between two values in the array, or not preparing for a circumstance in which a <i>third</i> value could be added to that array. The second most common implementation mistake was excessive bandwidth usage: instead of looking at the JSON response and picking a URL <i>before</i> fetching it, many people opted to fetch <i>both</i> URLs, and then return one of the two responses to the user. In a small serverless application, this isn't a huge deal, but in a larger application, excessive bandwidth usage or wasteful request time can be a huge problem!</p>
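<p>To make the bandwidth point concrete, here's a minimal sketch of the approach we were hoping to see – pick a single variant URL from the JSON response <i>before</i> fetching it. (The API URL, response shape, and function names here are assumptions for illustration, not the exercise's exact API.)</p>

```javascript
// Pick exactly one variant URL, then fetch only that one.
// Indexing by the array's length means a third variant would "just work".
function pickVariant(variants) {
  return variants[Math.floor(Math.random() * variants.length)]
}

// Hypothetical handler: one request for the variant list,
// one request for the single chosen variant – never both variants.
async function handleRequest(request) {
  const apiResponse = await fetch('https://cfw-takehome.developers.workers.dev/api/variants')
  const { variants } = await apiResponse.json()
  return fetch(pickVariant(variants))
}
```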
    <div>
      <h3>Finding the solution and next steps</h3>
      <a href="#finding-the-solution-and-next-steps">
        
      </a>
    </div>
    <p>If you're interested in checking out more about the fullstack example exercise we gave to our intern applicants this year, check out the source on GitHub: <a href="https://github.com/cloudflare-internship-2020/internship-application-fullstack">https://github.com/cloudflare-internship-2020/internship-application-fullstack</a>.</p><p>If you tried the exercise and want to build more stuff with Cloudflare Workers, check out our docs! We have tons of tutorials and templates available to help you get up and running: <a href="https://workers.cloudflare.com/docs">https://workers.cloudflare.com/docs</a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Careers]]></category>
            <category><![CDATA[Internship Experience]]></category>
            <guid isPermaLink="false">7ueF5n9uFL4awxoTD2owZ2</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[JAMstack at the Edge: How we built Built with Workers… on Workers]]></title>
            <link>https://blog.cloudflare.com/jamstack-at-the-edge-how-we-built-built-with-workers-on-workers/</link>
            <pubDate>Wed, 29 Jan 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Announcing Built with Workers – a resource for exploring what you can build with Cloudflare Workers. This resource showcases developers building incredible projects with tools like Workers KV  ]]></description>
            <content:encoded><![CDATA[ <p>I'm extremely stoked to announce <a href="https://workers.cloudflare.com/built-with">Built with Workers</a> today – it's an awesome resource for exploring what you can build with <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>. As Adam explained in <a href="/built-with-workers/">our launch post</a>, showcasing developers building incredible projects with tools like Workers KV or our <a href="https://developers.cloudflare.com/workers/reference/apis/html-rewriter/">streaming HTML rewriter</a> is a great way to celebrate users of our platform. It also helps encourage developers to try building their dream app on top of Workers. In this post, I’ll explore some of the architectural and implementation designs we made while building the site.</p><p>When we first started planning Built with Workers, we wanted to use the site as an opportunity to build a new greenfield application, showcasing the strength of the Workers platform. The Workers Developer Experience team is cross-functional: while we might spend most of our time improving our docs, or developing features for our command-line interface <a href="https://github.com/cloudflare/wrangler">Wrangler,</a> most of us have spent years developing on the web. The prospect of starting a new application is always fun, but in this instance, it was a prime chance to ask (and answer) the question, <i>"If I could build this site on Workers with whatever tools I want, what would I choose?"</i></p><p>A guiding principle for the Workers platform is ease-of-use. The programming model is simple: it's just JavaScript (or, via WASM, Rust, C, and C++), and you have complete control over the requests coming in and the requests going out from your Workers script. In the same way, while building Built with Workers, it was <b>crucial</b> to find a set of tools that could enable something like this throughout the process of building the entire application. 
To enable this, we've embraced <a href="https://www.cloudflare.com/the-net/jamstack-websites/"><b>JAMstack</b></a> – a software stack comprised of JavaScript, <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a>, and markup – with Built with Workers, deploying always up-to-date static builds of the site <i>directly to the edge</i>, using <a href="https://workers.cloudflare.com/sites">Workers Sites</a>. Our framework of choice, <a href="https://www.gatsbyjs.org/">Gatsby.js</a>, provides a set of sane defaults to build a modern web application. To manage content and the layout of the site, we've chosen <a href="https://www.sanity.io/">Sanity.io</a>, a powerful <a href="https://en.wikipedia.org/wiki/Headless_content_management_system">headless CMS</a> that allows us to model the entire website without needing to deploy any databases or spin up any additional infrastructure.</p><p>Personally, I'm excited about JAMstack as a methodology for building web applications <i>because</i> of this emphasis on reducing infrastructure: it's incredibly similar to the motivations behind deploying serverless applications using Cloudflare Workers, and as we developed Built with Workers, we discovered a number of philosophical similarities between JAMstack and Cloudflare Workers – exciting! To help encourage developers to explore building their own JAMstack applications on Workers, I'm also announcing today that we've made the Built with Workers codebase open-source on GitHub – you can check out how the application is developed, built and deployed from start to finish.</p><p>In this post, we'll dig into Built with Workers, exploring how it works, the technical decisions we've made, and some of the most fascinating aspects of what it means to build applications on the edge.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1umidSCuFOnPav3ZdYthdC/31579479c96cf58508268af85780f38e/bww.png" />
            
            </figure><p>A screenshot of the Built with Workers homepage</p>
    <div>
      <h2>Exploring the JAMstack</h2>
      <a href="#exploring-the-jamstack">
        
      </a>
    </div>
    <p>My first encounter with tooling that would ultimately become part of "JAMstack" was in 2013. I noticed the huge proliferation of developers building personal "static" sites – taking blog posts written primarily in <a href="https://blog.cloudflare.com/markdown-for-agents/">Markdown</a>, and pushing them through frameworks like Jekyll to build full websites that could easily be deployed to a number of <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a> and file hosting platforms. These static sites were <b>fast</b> – they were just HTML, CSS, and JavaScript – and <b>easy to update</b>. The average developer spends their days maintaining large and complex software systems, so it was relaxing to just write Markdown, plug in some re-usable HTML and CSS, and deploy a website. The advent of static sites, of course, wasn't new – but after years of increasingly complex full-stack technology, the return to simplicity was a promising development for many kinds of websites that didn't need databases or any dynamic content.</p><p>In the last couple of years, JAMstack has built upon that resurgence, and represents an approach to building complete, complex applications using the same tooling that has become the first choice for developers building their simple personal sites. JAMstack is comprised of three conceptual pieces – <i>JavaScript</i>, <i>APIs</i>, and <i>Markup</i> – each of which is a crucial aspect of simplifying our web applications and making them easy to write, build, and deploy.</p>
    <div>
      <h3>J is for JavaScript</h3>
      <a href="#j-is-for-javascript">
        
      </a>
    </div>
    <p>The JAMstack architecture relies heavily on the ubiquity of JavaScript as the language of the web. Many modern web applications use powerful, dynamic front-end frameworks like <i>React</i> and <i>Vue</i> to render user interfaces and process state on the client for users. On the backend, or in Workers' case, on the edge, any dynamic functionality in your JAMstack application should be built on top of JavaScript, often working in the request-response model that full-stack developers are accustomed to.</p><p>The Workers platform is <b>perfectly</b> suited to this requirement! As a developer building on Workers, you have total control of incoming requests and outgoing responses, using the JavaScript Service Worker APIs you know and love. We built <a href="https://workers.cloudflare.com/sites">Workers Sites</a> as an extension of the Workers platform (and Workers KV as a storage mechanism at the edge), making it possible to deploy your site assets using a single command in Wrangler: <code>wrangler publish</code>.</p><p>When your Workers Site receives a new request, we'll execute JavaScript at the edge to look up a piece of content from Workers KV, and serve it back to the client at lightning speed. Remarkably, you can deploy JAMstack applications on Workers with no additional configuration besides <a href="https://developers.cloudflare.com/workers/sites/start-from-existing">generating your Workers Site</a> – <b>by design, Workers Sites is built to serve as an exceptional JAMstack deployment platform</b>.</p>
    <div>
      <h3>A is for APIs</h3>
      <a href="#a-is-for-apis">
        
      </a>
    </div>
    <p>The advent of static site tooling for personal sites makes sense: your site is only a few pages – a few blog posts, for instance, and the classic "About" or "Contact" page. When it's compiled to HTML, the footprint is quite small! This small footprint is what makes static sites easy to reason about: they're trivial to host in terms of bandwidth and storage costs, and they rarely change, so they're easily cacheable.</p><p>While that works for personal sites, complex applications actually have data requirements! We need to talk to the user data in our databases, and to the analytics information in our data warehouses. JAMstack apps tackle this by definitively stating that these data sources should be accessible via HTTPS APIs, consumable by the application as a way to provide dynamic information to clients.</p><p>Workers is a <b>fascinating</b> platform in regards to JAMstack APIs. It can serve as a gateway to your data, or as a place to persist and return data itself. I can, for instance, expose an API endpoint via my Workers script without giving clients access to my origin. I can also use tooling like <a href="https://www.cloudflare.com/products/workers-kv/">Workers KV</a> to persist data directly on the edge, and when a user requests that data, I can resolve the request by returning JSON directly from my application.</p><p>This flexibility has been an unexpected part of the experience of developing Built with Workers. In a later section of this post, I'll talk about how we developed an integral feature of the site based on the unique strengths of Workers as a way to <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">host static assets</a> <i>and</i> as a dynamic JavaScript execution platform. This has remarkable implications that blur the lines between classic static sites and dynamic applications, and I'm <b>really</b> excited about it.</p>
    <div>
      <h3>M is for Markup</h3>
      <a href="#m-is-for-markup">
        
      </a>
    </div>
    <p>A breakthrough moment in my understanding of JAMstack came at the beginning of this year. I was working on a job board for frontend developers, using the static site framework Gatsby.js and Sanity.io, a headless CMS tool that allows developers to model content without maintaining a database or any infrastructure. (As a reminder – this set of tools is <i>identical</i> to what we ultimately used to develop Built with Workers. It's a very good stack!)</p><p>SEO is crucial to a job board, and as I began to explore how to drive more traffic to my job board, I landed on the idea of generating a huge amount of search-oriented content automatically from the job data I already had. For instance, if I had job posts with the keywords <i>"React"</i>, <i>"Europe"</i>, and <i>"Senior"</i> (as in <i>"senior developer"</i>), what if I created pages with titles like <i>"Senior React developer jobs in Europe"</i>, or <i>"Remote Angular jobs"</i>? This approach would allow the site to begin ranking for a variety of job positions, locations, and experience levels, and as more jobs were posted on the site, each of these pages would be enriched with more useful information and relevant content, helping it rank higher in search.</p><p><i>"But static sites are... static!"</i>, I told myself. Would I need to build an entire dynamic API on top of my static site, just to be able to serve these search-engine optimized pages? This led me to a "eureka" moment with Gatsby – I could define markup (the <i>"M"</i> in JAMstack), and when I'm building my site, I could look at all the available job data I had, cycling through every available keyword combination and inserting it into my markup to generate thousands of these pages. 
As I later learned, this idea is not necessarily unique to Gatsby – it is possible, for instance, to automate getting data from your API and writing it to data files in earlier static site frameworks like <a href="https://gohugo.io/">Hugo</a> – but it is a first-class citizen in Gatsby. There are a <i>ton</i> of data sources available via Gatsby plugins, and because they're all exposed via HTTPS, the workflow is standardized inside of the framework.</p><p>In Built with Workers, we connect to the Sanity.io CMS instance at <i>build-time</i>: crucially, by the time that the site has been deployed to Workers, the application effectively has <i>no idea</i> what Sanity even is! Our Gatsby application connects to Sanity.io via an HTTPS API, and using GraphQL, we look at all the data that we have in our CMS, and make decisions about what pages to generate and how to render the site's interface, ultimately resulting in a statically-built application that is derived from dynamic data.</p><p>This emphasis on the <i>build</i> step in JAMstack is quite different than the classic web application. In the past, a user requested data, a web server looked at what the user was requesting, and then the user <i>waits</i>, as the server gets that data, returns JSON, or interpolates it into templates written in tools like <a href="https://pugjs.org/">Pug</a> or <a href="https://en.wikipedia.org/wiki/ERuby">ERB</a>. With JAMstack, the pages are already built, and the deployed application is just a collection of plain HTML, CSS, and JavaScript.</p>
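<p>The page-generation idea above is easy to sketch: at build time, enumerate every keyword combination and emit a page path for each one, ready to hand to something like Gatsby's <code>createPage</code>. (The keyword lists and slug format are made up for illustration.)</p>

```javascript
// One landing-page path per (level, technology, location) combination.
function landingPagePaths(levels, technologies, locations) {
  const paths = []
  for (const level of levels) {
    for (const tech of technologies) {
      for (const location of locations) {
        paths.push(`/${level}-${tech}-jobs-in-${location}`.toLowerCase())
      }
    }
  }
  return paths
}

// landingPagePaths(['Senior'], ['React', 'Angular'], ['Europe'])
// → ['/senior-react-jobs-in-europe', '/senior-angular-jobs-in-europe']
```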
    <div>
      <h3>Why Cloudflare Workers?</h3>
      <a href="#why-cloudflare-workers">
        
      </a>
    </div>
    <p>Cloudflare's network is a fascinating place to deploy JAMstack applications. Yes, Cloudflare's edge network can act as a CDN for your static assets, like your CSS stylesheets, or your client-side JavaScript code. But with Workers, we now have the ability to run JavaScript side-by-side with our static assets. In most JAMstack applications, the CDN is simply a bucket where your application ends up. Usually, the CDN is the most boring part of the stack! <b>With Cloudflare Workers, we don't just have a CDN: we also have access to an extremely low-latency, fully-featured JavaScript runtime.</b></p><p>The implications of this on the standard JAMstack workflow are, frankly, <i>mind-boggling</i>, and as part of developing Built with Workers, we've been exploring what it means to have this runtime available side-by-side with our statically-built JAMstack application.</p><p>To demonstrate this, we’ve implemented a bookmarking feature, which allows users of Built with Workers to bookmark projects. If you look at a project's usage of our streaming HTML rewriter and say <i>"Wow, that's cool!"</i>, you can bookmark the project to show your support. This feature, rendered as a <code>button</code> tag, is deceptively simple: it's a single piece of the user interface that makes use of the entirety of the Workers platform to provide user-specific dynamic functionality. We'll explore this in greater detail later in the post – see <i>"Enhancing static sites at the edge"</i>.</p><blockquote><p>i'm super excited about this and spent some time this morning re-writing my first take at this to be something that i think is super compelling</p><p><a href="https://t.co/LJltC6j20S">https://t.co/LJltC6j20S</a></p><p>— ً (@signalnerve) <a href="https://twitter.com/signalnerve/status/1199379423608344576?ref_src=twsrc%5Etfw">November 26, 2019</a></p></blockquote>
    <div>
      <h2>A modern development and content workflow</h2>
      <a href="#a-modern-development-and-content-workflow">
        
      </a>
    </div>
    <p>In the <a href="/workers-sites/">announcement post</a> for Workers Sites, Rita outlined the motivations behind launching Workers Sites as a modern way to deploy sites:</p><p><i>"Born on the edge, Workers Sites is what we think modern development on the web should look like, natively secure, fast, and massively scalable. Less of your time is spent on configuration, and more of your time is spent on your code, and content itself."</i></p><p>A few months later, I can say definitively that Workers Sites has enabled us to develop Built with Workers and spend almost no time on configuration. Using <a href="https://github.com/cloudflare/wrangler-action">our GitHub Action</a> for deploying Workers applications with Wrangler, the site has been continuously deploying to a staging environment for the past couple of weeks. The simplicity of this continuous deployment workflow has allowed us to focus on the important aspects of the project: development and content.</p><p>The static site framework ecosystem is fairly competitive, but as we considered our options for this site, I advocated strongly for Gatsby.js. It's an incredible tool for building JAMstack applications, with a great set of defaults for building performant applications. It's common to see Gatsby sites with <a href="https://developers.google.com/web/tools/lighthouse/">Lighthouse</a> scores in the upper 90s, and the decision to use React for implementing the UI makes it straightforward to onboard new developers if they're familiar with React.</p><p>As I mentioned in a previous section, Gatsby shines at <i>build-time</i>. Gatsby's APIs for creating pages during the build process based on API data are incredibly powerful, allowing developers to concretely define every statically-generated page on their web application, as well as any relevant data that needs to be passed in.</p><p>With Gatsby decided upon as our static site framework, we needed to evaluate where our content would live.
Built with Workers has two primary data models, used to generate the UI:</p><ul><li><p><b>Projects</b>: websites, applications, and APIs created by developers using Cloudflare Workers. For instance, Built with Workers!</p></li><li><p><b>Features</b>: features available on the Workers platform used to build applications. For instance, Workers KV, or our streaming HTML rewriter/parser.</p></li></ul><p>Given these requirements, there were a number of potential approaches to take to store this data, and make it accessible. Keeping in line with JAMstack, we know that we probably should expose it via an HTTPS API, but from where? In what format?</p><p>As a full-stack developer who's comfortable with databases, it's easy to envision a world where we spin up a PostgreSQL instance, write a REST API, and write all kinds of <code>fetch('/api/projects')</code> to get the information we need. This method works, but we can do better! In the same way we built Workers Sites to simplify the deployment process, it was worthwhile to explore the JAMstack ecosystem and see what solutions exist for modeling data without being on the hook for more infrastructure.</p><p>Of the different tools in the ecosystem – databases, whether SQL or NoSQL, key-value stores (such as our own, Workers KV), etc. – the growth of "headless CMS" tools has made the largest impact on my development workflow.</p><p>On CSS Tricks, Chris Coyier wrote about the rise of headless CMS tools back in March 2016, and <a href="https://css-tricks.com/what-is-a-headless-cms/">summarizes their function</a> well:</p><p><i>[Headless CMSes are] very related to The Big Conversation™ on the web the last many years. 
How are we going to handle bringing Our Stuff™ to all these different devices/screens/inputs? Responsive design says "let's let our design and media accommodate as much variation in screens as possible." Progressive enhancement says "let's make the functionality of this site work no matter what." Designing for accessibility says "let's ensure everyone can use this regardless of their capabilities as a person." A headless CMS says "let's not tie our data to any one way of doing things."</i></p><p>Using our headless CMS, Sanity.io, we can get every project inside our dataset, and call Gatsby's <code>createPage</code> function to create a new page for each project, using a pre-defined project template file:</p>
            <pre><code>// gatsby-node.js

exports.createPages = async ({ graphql, actions }) =&gt; {
  const { createPage } = actions;

  const result = await graphql(`
    {
      allSanityProject {
        edges {
          node {
            slug
          }
        }
      }
    }
  `);

  if (result.errors) {
    throw result.errors;
  }

  const {
    data: { allSanityProject }
  } = result;

  const projects = allSanityProject.edges.map(({ node }) =&gt; node);
  projects.forEach((node, _index) =&gt; {
    const path = `/built-with/projects/${node.slug}`;

    createPage({
      path,
      component: require.resolve("./src/templates/project.js"),
      context: { slug: node.slug }
    });
  });
};</code></pre>
            <p>Using Sanity to drive the content for Built with Workers has been a <i>huge</i> win for our team. We're no longer constrained by code deploys to make changes to content on the site – we don't need to make a pull request to create a new project, and edits to a project's name or description aren't gated on someone with the ability to deploy the project. Instead, we can empower members of our team to log in directly to the CMS and make changes, and be confident that once the corresponding deploy has completed (see <i>"The CDN is the deployment platform"</i> below), their changes will be live on the site.</p>
    <div>
      <h3>Dynamic JAMstack layouts</h3>
      <a href="#dynamic-jamstack-layouts">
        
      </a>
    </div>
    <p>As our team got up and running with Sanity.io, we found that the flexibility of a headless CMS platform was useful not just for modeling our original data requirements – projects and features – but for rethinking and innovating on how we actually format the application itself.</p><p>Keeping in mind our objective of empowering non-technical folks to make changes to the site without deploying any code, we've also taken the entire homepage of Built with Workers and defined it as an instance of the "layout" data model in Sanity.io. Inside a layout, we can define corresponding "collections", which are sets of projects. When a layout has many collections defined inside of the CMS, we can rapidly re-order, re-arrange, and experiment with new collections on the homepage, seeing the updated version of the site reflected immediately – and live on the production site within only a few minutes, once our continuous deployment process has finished.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1x7v7SMVVKcuM48wKvz22T/50e4a31742d85f60e689574f75af1541/bww-demo.gif" />
            
            </figure><p>Updating the layout of Built with Workers live from Sanity's studio</p><p>With this work implemented, it's easy to envision a world where our React code is purely concerned with rendering each individual aspect of the application's interface – for instance, the project title component, or the "card" for an individual project – and the CMS drives the entire layout of the site. In the future, I'd like to continue exploring this idea in other pages in Built with Workers, including the project pages and any other future content we put on the site.</p>
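<p>To make the "layout" idea concrete, here's a rough sketch: the rendering code only knows how to draw a collection, and the order of collections comes entirely from the CMS. (The data shape is an assumption for illustration, not our actual Sanity schema.)</p>

```javascript
// A layout document: an ordered list of collections referencing projects.
// Re-ordering collections in the CMS re-orders the homepage – no code deploy.
const layout = {
  collections: [
    { title: 'Featured', projects: ['built-with-workers', 'web-scraper'] },
    { title: 'Built with HTMLRewriter', projects: ['bytesized-scraper'] },
  ],
}

// Stand-in for the React render: one section per collection, in CMS order.
function renderLayout(layout) {
  return layout.collections
    .map(collection => `${collection.title}: ${collection.projects.join(', ')}`)
    .join('\n')
}
```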
    <div>
      <h2>Enhancing static sites at the edge</h2>
      <a href="#enhancing-static-sites-at-the-edge">
        
      </a>
    </div>
    <p>Much of what we've discussed so far can be thought of as features and workflows that have great DX (developer experience), but aren't specific to Workers. Gatsby and Sanity.io are great, and although Workers Sites is a great platform for deploying JAMstack applications due to the Workers platform's low-latency and performance characteristics, you could deploy the site to a number of different providers with no real differentiating features.</p><p>As we began building Built with Workers as a JAMstack application, we also wanted to explore how the Workers platform could allow developers to combine the simplicity of static site deployments with the dynamism of having a JavaScript runtime immediately available.</p><p>In particular, our recently-released streaming HTML rewriter seemed like a perfect fit for "enhancing" our static sites. Our application is served by Workers Sites, which is itself a Workers template that can be customized. By passing each HTML page through the HTML rewriter on its way to the client, we had an opportunity to customize the content without any negative performance implications.</p><p>As I mentioned previously, we landed on a first exploration of this platform advantage via the "bookmark" button. Users of Built with Workers can "bookmark" a project – this action sends a request back up to the Workers application, storing the bookmark data as JSON in Workers KV.</p>
            <pre><code>// User-specific data stored in Workers KV, representing
// per-project bookmark information

{
  "bytesized_scraper_bookmarked": false,
  "web_scraper_bookmarked": true
}</code></pre>
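<p>The write path for that JSON is a small KV update. Here's a hypothetical sketch – the <code>BOOKMARKS</code> binding, key format, and helper name are assumptions for illustration; the real implementation lives in the open-source repo:</p>

```javascript
// Persist a user's bookmark state as a JSON blob in Workers KV.
// BOOKMARKS is an assumed KV namespace binding.
async function toggleBookmark(userId, projectId, bookmarked) {
  const key = `user:${userId}`
  // KV returns a string or null – start from an empty object
  const state = JSON.parse((await BOOKMARKS.get(key)) || '{}')
  state[`${projectId}_bookmarked`] = bookmarked
  await BOOKMARKS.put(key, JSON.stringify(state))
  return state
}
```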
            <p>When a user returns to Built with Workers, we can make a request to Workers KV, looking for corresponding data for that user and the project they're currently viewing. If that data exists, we can embed the "edge state" directly into the HTML using the streaming HTML rewriter.</p>
            <pre><code>// workers-site/index.js

import { getAssetFromKV } from "@cloudflare/kv-asset-handler"

addEventListener("fetch", event =&gt; { 
  event.respondWith(handleEvent(event)) 
})

class EdgeStateEmbed {
  constructor(state) {
    this._state = state
  }
  
  element(element) {
    const edgeStateElement = `
      &lt;script id='edge_state' type='application/json'&gt;
        ${JSON.stringify(this._state)}
      &lt;/script&gt;
    `
    element.prepend(edgeStateElement, { html: true })
  }
}

const hydrateEdgeState = async ({ state, response }) =&gt; {
  const rewriter = new HTMLRewriter().on(
    "body",
    new EdgeStateEmbed(await state)
  )
  return rewriter.transform(await response)
}

async function handleEvent(event) {
  return hydrateEdgeState({
    // Serve the static asset for this request from Workers KV
    response: getAssetFromKV(event),
    // Get associated state for a request, based on the user and URL
    // (transformBookmark is defined elsewhere in the project)
    state: transformBookmark(event.request),
  })
}</code></pre>
            <p>When the React application is rendered on the client, it can then check for that embedded edge state, which influences how the "bookmark" icon is rendered – either as "bookmarked" or "not bookmarked". To support this, we've leaned on React's <code>useContext</code>, which allows any component inside of the application component tree to pull out the edge state and use it inside of the component:</p>
            <pre><code>// edge_state.js

import React from "react"
import { useSSR } from "../utils"

const parseDocumentState = () =&gt; {
  const edgeStateElement = document.querySelector("#edge_state")
  return edgeStateElement ? JSON.parse(edgeStateElement.innerText) : {}
}

export const EdgeStateContext = React.createContext([{}, () =&gt; {}])
export const EdgeStateProvider = ({ children }) =&gt; {
  const { isBrowser } = useSSR()
  if (!isBrowser) {
    return &lt;&gt;{children}&lt;/&gt;
  }
  
  const edgeState = parseDocumentState()
  const [state, setState] = React.useState(edgeState)
  const updateState = (newState, options = { immutable: true }) =&gt; options.immutable
    ? setState(Object.assign({}, state, newState))
    : setState(newState)
  
  return (
    &lt;EdgeStateContext.Provider value={[state, updateState]}&gt;
      {children}
    &lt;/EdgeStateContext.Provider&gt;
  )
}

// Inside of a React component
const Bookmark = ({ bookmarked, project, setBookmarked, setLoaded }) =&gt; {
  const [state, setState] = React.useContext(EdgeStateContext)
  // `bookmarked` value is a simplification of actual code
  return &lt;BookmarkButton bookmarked={state[project.id]} /&gt;
}</code></pre>
            <p>The combination of a straightforward JAMstack deployment platform with dynamic key-value <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a> and a streaming HTML rewriter is really, really cool. This is an initial exploration into what I consider to be <i>a platform-defining feature</i>, and if you're interested in this stuff and want to continue to explore how this will influence how we write web applications, <a href="https://twitter.com/signalnerve">get in touch with me on Twitter</a>!</p>
    <div>
      <h2>The CDN is the deployment platform</h2>
      <a href="#the-cdn-is-the-deployment-platform">
        
      </a>
    </div>
    <p>While it doesn't appear in the acronym, an unsung hero of the JAMstack architecture is <b>deployment</b>. In my local terminal, when I run <code>gatsby build</code> inside of the Built with Workers project, the result is a folder of static HTML, CSS, and JavaScript. <i>It should be easy to deploy!</i></p><p>The recent release of <a href="https://github.com/features/actions">GitHub Actions</a> has proven to be a great companion to building JAMstack applications with Cloudflare Workers – we've open-sourced our own <a href="https://github.com/cloudflare/wrangler-action">wrangler-action</a>, which allows developers to build their Workers applications and deploy directly from GitHub.</p><p>The standard workflows in the continuous deployment world – deploy every hour, deploy on new changes to the <code>master</code> branch, etc. – are possible and already being used by many developers who make use of our <code>wrangler-action</code> workflow in their projects. Particular to JAMstack and to headless CMS tools is the idea of "build-on-change": namely, when someone publishes a change in Sanity.io, we want to do a new deploy of the site to immediately reflect our new content in production.</p><p>The versatility of Workers as a place to deploy JavaScript code comes to the rescue, again! By telling Sanity.io to make a GET request to a deployed Workers webhook, we can trigger a <code>repository_dispatch</code> event on GitHub Actions for our repository, allowing new deploys to happen immediately after a change is detected in the CMS:</p>
            <pre><code>const headers = {
  Accept: 'application/vnd.github.everest-preview+json',
  Authorization: 'Bearer $token',
}

const body = JSON.stringify({ event_type: 'repository_dispatch' })

const url = `https://api.github.com/repos/cloudflare/built-with-workers/dispatches`

const handleRequest = async evt =&gt; {
  await fetch(url, { method: 'POST', headers, body })
  return new Response('OK')
}

addEventListener('fetch', evt =&gt; evt.respondWith(handleRequest(evt)))</code></pre>
            <p>In doing this, we've made it possible to completely abstract away <i>every</i> deployment task around the Built with Workers project. Not only does the site deploy on a schedule, and on new commits to <code>master</code>, but it can also do additional deploys as the content changes, so that the site is always reflective of the current content in our CMS.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2CR3w7eC9bk2QZXFlirn1o/fea6c2fe3b3e137c1b78aacabba44d4c/i6sG6fa_qL9h3uHrSQWI5adnC41xFKaS3vwBIKqk_WqqjSzQ_qZhT5VGnMo8DTMyfyqc8YiPUgTiw77EzopT9cyVWUn1HSrVxsFbKCPHkI-MVtbzTxLJZH8KbYrw.png" />
            
            </figure><p>The GitHub Actions deployment workflow for Built with Workers</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We're <b>super</b> excited about Built with Workers, not only because it will serve as a great place to showcase the incredible things people are building with the Cloudflare Workers platform, but because it also has allowed us to explore what the future of web development may look like. I've been advocating for what I've seen referred to as <a href="https://www.bytesized.xyz/full-stack-serverless">"full-stack serverless"</a> development throughout 2019, and I couldn't be happier to start 2020 with launching a project like Built with Workers. The full-stack serverless stack feels like the future, and it's actually fun to build with on a daily basis!</p><p>If you're building something awesome with Cloudflare Workers, we're looking for submissions to the site! Get in touch with us via <a href="https://forms.gle/k6fZaXJbrUygvZhp6">this form</a> – we're excited to speak with you!</p><p>Finally, if topics like JAMstack on Cloudflare Workers, "edge state" and dynamic static site hydration, or continuous deployment interest you, the Built with Workers repository is open-source! <a href="https://github.com/cloudflare/built-with-workers">Check it out</a>, and if you're inspired to build something cool with Workers after checking out the code, make sure to let us know!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[JAMstack]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6tUTWYaVjn742TLZMm1TXo</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building a GraphQL server on the edge with Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/building-a-graphql-server-on-the-edge-with-cloudflare-workers/</link>
            <pubDate>Wed, 14 Aug 2019 03:21:00 GMT</pubDate>
            <description><![CDATA[ Today, we're open-sourcing an exciting project that showcases the strengths of our Cloudflare Workers platform: workers-graphql-server is a batteries-included Apollo GraphQL server, designed to get you up and running quickly with GraphQL. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, we're open-sourcing an exciting project that showcases the strengths of our Cloudflare Workers platform: <code>workers-graphql-server</code> is a batteries-included <a href="https://apollographql.com">Apollo GraphQL</a> server, designed to get you up and running quickly with <a href="https://graphql.com">GraphQL</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1AJH4z33GBgzCVmgEAuaSQ/e7643db81ee32fef75113d0bc22df3e8/Screen-Shot-2019-08-14-at-11.05.06-AM.png" />
            
            </figure><p>Testing GraphQL queries in the GraphQL Playground</p><p>As a full-stack developer, I’m really excited about GraphQL. I love building user interfaces with <a href="https://reactjs.org">React</a>, but as a project gets more complex, it can become really difficult to manage how data flows through an application. GraphQL makes that really easy — instead of having to recall the REST URL structure of your backend API, or remember when your backend server doesn't <i>quite</i> follow REST conventions — you just tell GraphQL what data you want, and it takes care of the rest.</p><p>Cloudflare Workers is uniquely suited to <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">hosting</a> a GraphQL server. Because your code is running on Cloudflare's servers around the world, the average latency for your requests is extremely low, and by using <a href="https://github.com/cloudflare/wrangler">Wrangler</a>, our open-source command line tool for building and managing Workers projects, you can deploy new versions of your GraphQL server around the world within seconds.</p><p>If you'd like to try the GraphQL server, check out a demo GraphQL playground, <a href="https://graphql-on-workers.signalnerve.com/___graphql">deployed on Workers.dev</a>. 
This optional add-on to the GraphQL server allows you to experiment with GraphQL queries and mutations, giving you a super powerful way to understand how to interface with your data, without having to hop into a codebase.</p><p>If you're ready to get started building your own GraphQL server with our new open-source project, we've added a new tutorial to our <a href="https://workers.cloudflare.com/docs">Workers documentation</a> to help you get up and running — <a href="https://developers.cloudflare.com/workers/get-started/quickstarts#frameworks">check it out here</a>!</p><p>Finally, if you're interested in <i>how</i> the project works, or want to help contribute — it's open-source! We'd love to hear your feedback and see your contributions. Check out the project <a href="https://github.com/signalnerve/workers-graphql-server">on GitHub</a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[GraphQL]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">l68Fsw4jkO2iFbriaKvyD</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building a To-Do List with Workers and KV]]></title>
            <link>https://blog.cloudflare.com/building-a-to-do-list-with-workers-and-kv/</link>
            <pubDate>Tue, 21 May 2019 13:30:00 GMT</pubDate>
            <description><![CDATA[ In this tutorial, we’ll build a todo list application in HTML, CSS and JavaScript, with a twist: all the data should be stored inside of the newly-launched Workers KV, and the application itself should be served directly from Cloudflare’s edge network, using Cloudflare Workers. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>In this tutorial, we’ll build a todo list application in HTML, CSS and JavaScript, with a twist: all the data should be stored inside of the newly-launched Workers KV, and the application itself should be served directly from Cloudflare’s edge network, using <a href="https://www.cloudflare.com/products/cloudflare-workers/">Cloudflare Workers</a>.</p><p>To start, let’s break this project down into a couple different discrete steps. In particular, it can help to focus on the constraint of working with Workers KV, as handling data is generally the most complex part of building an application:</p><ol><li><p>Build a todos data structure</p></li><li><p>Write the todos into Workers KV</p></li><li><p>Retrieve the todos from Workers KV</p></li><li><p>Return an HTML page to the client, including the todos (if they exist)</p></li><li><p>Allow creation of new todos in the UI</p></li><li><p>Allow completion of todos in the UI</p></li><li><p>Handle todo updates</p></li></ol><p>This task order is pretty convenient, because it’s almost perfectly split into two parts: first, understanding the Cloudflare/API-level things we need to know about Workers <i>and</i> KV, and second, actually building up a user interface to work with the data.</p>
    <div>
      <h3>Understanding Workers</h3>
      <a href="#understanding-workers">
        
      </a>
    </div>
    <p>In terms of implementation, a great deal of this project is centered around KV - but before we get there, it’s useful to break down <i>what</i> Workers are exactly.</p><p>Service Workers are background scripts that run in your browser, alongside your application. Cloudflare Workers are the same concept, but super-powered: your Worker scripts run on Cloudflare’s edge network, in-between your application and the client’s browser. This opens up a huge amount of opportunity for interesting integrations, especially considering the network’s massive scale around the world. Here are some of the use-cases that I think are the most interesting:</p><ol><li><p>Custom security/filter rules to block bad actors before they ever reach the origin</p></li><li><p>Replacing/augmenting your website’s content based on the request content (i.e. user agents and other headers)</p></li><li><p>Caching requests to improve performance, or using Cloudflare KV to optimize high-read tasks in your application</p></li><li><p>Building an application <i>directly</i> on the edge, removing the dependence on origin servers entirely</p></li></ol><p>For this project, we’ll lean heavily towards the latter end of that list, building an application that clients communicate with, served on Cloudflare’s edge network. This means that it’ll be globally available, with low latency, while keeping the ease of building applications directly in JavaScript.</p>
    <div>
      <h3>Setting up a canvas</h3>
      <a href="#setting-up-a-canvas">
        
      </a>
    </div>
    <p>To start, I wanted to approach this project from the bare minimum: no frameworks, JS utilities, or anything like that. In particular, I was most interested in writing a project from scratch and serving it directly from the edge. Normally, I would deploy a site to something like <a href="https://pages.github.com/">GitHub Pages</a>, but avoiding the need for an origin server altogether seems like a really powerful (and performant) idea - let’s try it!</p><p>I also considered using <a href="https://todomvc.com/">TodoMVC</a> as the blueprint for building the functionality for the application, but even the <a href="http://todomvc.com/examples/vanillajs/#/">Vanilla JS</a> version is a pretty impressive amount of <a href="https://github.com/tastejs/todomvc/tree/gh-pages/examples/vanillajs">code</a>, including a number of Node packages - it wasn’t exactly a concise chunk of code to just dump into the Worker itself.</p><p>Instead, I decided to approach the beginnings of this project by building a simple, blank HTML page, and including it inside of the Worker. To start, we’ll sketch something out locally, like this:</p>
            <pre><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset="UTF-8"&gt;
    &lt;meta name="viewport" content="width=device-width,initial-scale=1"&gt;
    &lt;title&gt;Todos&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;h1&gt;Todos&lt;/h1&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre>
            <p>Hold on to this code - we’ll add it later, inside of the Workers script. For the purposes of the tutorial, I’ll be serving up this project at <a href="http://todo.kristianfreeman.com/">todo.kristianfreeman.com</a>. My personal website was already <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">hosted on Cloudflare</a>, and since I’ll be serving this project from a subdomain of it, it was time to create my first Worker.</p>
    <div>
      <h3>Creating a worker</h3>
      <a href="#creating-a-worker">
        
      </a>
    </div>
    <p>Inside of my Cloudflare account, I hopped into the Workers tab and launched the Workers editor.</p><p>One of my favorite features of the editor is that it lets you work against your actual website, so you can understand <i>how</i> the worker will interface with your existing project.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/121XS5g0iERXCdVwilld5G/8fe0b9a7e7b869aaf2b5d5bba3c51d98/image4-2.png" />
            
            </figure><p>The process of writing a Worker should be familiar to anyone who’s used the fetch library before. In short, the default code for a Worker hooks into the <code>fetch</code> event, passing the request of that event into a custom function, <code>handleRequest</code>:</p>
            <pre><code>addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})</code></pre>
            <p>Within <code>handleRequest</code>, we make the actual request, using <code>fetch</code>, and return the response to the client. In short, we have a place to intercept the response body, but by default, we let it pass-through:</p>
            <pre><code>async function handleRequest(request) {
  console.log('Got request', request)
  const response = await fetch(request)
  console.log('Got response', response)
  return response
}</code></pre>
            <p>So, given this, where do we begin actually <i>doing stuff</i> with our worker?</p><p>Unlike the default code given to you in the Workers interface, we want to skip fetching the incoming request: instead, we’ll construct a new <code>Response</code>, and serve it directly from the edge:</p>
            <pre><code>async function handleRequest(request) {
  const response = new Response("Hello!")
  return response
}</code></pre>
            <p>Given that very small functionality we’ve added to the worker, let’s deploy it. Moving into the “Routes” tab of the Worker editor, I added the route <code>https://todo.kristianfreeman.com/*</code> and attached it to the cloudflare-worker-todos script.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/29QYuFrFiwZKVwZHkWN90v/851c0e9b95c03badc9eccbee41079a20/image5.png" />
            
            </figure><p>Once attached, I deployed the worker, and voila! Visiting todo.kristianfreeman.com in-browser gives me my simple “Hello!” response back.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Boo3xWgZdHEDHgpUgIrKy/594ae6a988d40a13253af74937d9a964/Screen-Shot-2019-05-15-at-10.12.04-AM.png" />
            
            </figure>
    <div>
      <h3>Writing data to KV</h3>
      <a href="#writing-data-to-kv">
        
      </a>
    </div>
    <p>The next step is to populate our todo list with actual data. To do this, we’ll make use of Cloudflare’s Workers KV - it’s a simple key-value store that you can access inside of your Worker script to read (and write, although it’s less common) data.</p><p>To get started with KV, we need to set up a “namespace”. All of our cached data will be stored inside that namespace, and given just a bit of configuration, we can access that namespace inside the script with a predefined variable.</p><p>I’ll create a new namespace in the Workers dashboard, called <code>KRISTIAN_TODOS</code>, and in the Worker editor, I’ll expose the namespace by binding it to the variable <code>KRISTIAN_TODOS</code>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4i4hCpAgNXFTWxg8X2CXW2/8f67cbecd052f177ca02ce089b511e65/image1-6.png" />
            
            </figure><p>Given the presence of <code>KRISTIAN_TODOS</code> in my script, it’s time to understand the KV API. At time of writing, a KV namespace has three primary methods you can use to interface with your cache: <code>get</code>, <code>put</code>, and <code>delete</code>. Pretty straightforward!</p><p>Let’s start storing data by defining an initial set of data, which we’ll put inside of the cache using the put method. I’ve opted to define an object, <code>defaultData</code>, instead of a simple array of todos: we may want to store metadata and other information inside of this cache object later on. Given that data object, I’ll use <code>JSON.stringify</code> to put a simple string into the cache:</p>
            <pre><code>async function handleRequest(request) {
  // ...previous code
  
  const defaultData = { 
    todos: [
      {
        id: 1,
        name: 'Finish the Cloudflare Workers blog post',
        completed: false
      }
    ] 
  }
  await KRISTIAN_TODOS.put("data", JSON.stringify(defaultData))
}
</code></pre>
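<p>If you’d like to exercise this logic outside the Workers runtime (for local testing, say), the slice of the KV interface used in this tutorial is small enough to stub in a few lines. This is a stand-in of my own, not an official mock; in production, the <code>KRISTIAN_TODOS</code> binding is provided by the platform:</p>

```javascript
// Minimal in-memory stand-in for a KV namespace binding, covering the
// three methods used in this tutorial: get, put, and delete.
// (A stub for local testing; the real binding is injected by Workers.)
class FakeKVNamespace {
  constructor() {
    this.store = new Map()
  }
  async get(key) {
    // Real KV returns null for missing keys, not undefined
    return this.store.has(key) ? this.store.get(key) : null
  }
  async put(key, value) {
    // KV values are stored as strings
    this.store.set(key, String(value))
  }
  async delete(key) {
    this.store.delete(key)
  }
}
```

<p>Binding it with <code>const KRISTIAN_TODOS = new FakeKVNamespace()</code> lets you run the snippets in this post under Node for quick experiments.</p>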
            <p>The Worker KV data store is <i>eventually</i> consistent: writing to the cache means that it will become available <i>eventually</i>, but it’s possible to attempt to read a value back from the cache immediately after writing it, only to find that the cache hasn’t been updated yet.</p><p>Given the presence of data in the cache, and the assumption that our cache is eventually consistent, we should adjust this code slightly: first, we should actually read from the cache, parse the value back out, and use it as the data source if it exists. If it doesn’t, we’ll refer to <code>defaultData</code>, setting it as the data source <i>for now</i> (remember, it should be set in the future… <i>eventually</i>), while also setting it in the cache for future use. After breaking out the code into a few functions for simplicity, the result looks like this:</p>
            <pre><code>const defaultData = { 
  todos: [
    {
      id: 1,
      name: 'Finish the Cloudflare Workers blog post',
      completed: false
    }
  ] 
}

const setCache = data =&gt; KRISTIAN_TODOS.put("data", data)
const getCache = () =&gt; KRISTIAN_TODOS.get("data")

async function getTodos(request) {
  // ... previous code
  
  let data;
  const cache = await getCache()
  if (!cache) {
    await setCache(JSON.stringify(defaultData))
    data = defaultData
  } else {
    data = JSON.parse(cache)
  }
}</code></pre>
            
    <div>
      <h3>Rendering data from KV</h3>
      <a href="#rendering-data-from-kv">
        
      </a>
    </div>
    <p>Given the presence of data in our code, which is the cached data object for our application, we should actually take this data and make it available on screen.</p><p>In our Workers script, we’ll make a new variable, <code>html</code>, and use it to build up a static HTML template that we can serve to the client. In <code>handleRequest</code>, we can construct a new <code>Response</code> (with a <code>Content-Type</code> header of <code>text/html</code>), and serve it to the client:</p>
            <pre><code>const html = `
&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset="UTF-8"&gt;
    &lt;meta name="viewport" content="width=device-width,initial-scale=1"&gt;
    &lt;title&gt;Todos&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;h1&gt;Todos&lt;/h1&gt;
  &lt;/body&gt;
&lt;/html&gt;
`

async function handleRequest(request) {
  const response = new Response(html, {
    headers: { 'Content-Type': 'text/html' }
  })
  return response
}</code></pre>
            
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JYK6WXof6rGxmk6ovyH27/77f4df33d945cf673df4510bf650b16b/Screen-Shot-2019-05-15-at-10.06.57-AM.png" />
            
            </figure><p>We have a static HTML site being rendered, and now we can begin populating it with data! In the body, we’ll add a <code>ul</code> tag with an id of <code>todos</code>:</p>
            <pre><code>&lt;body&gt;
  &lt;h1&gt;Todos&lt;/h1&gt;
  &lt;ul id="todos"&gt;&lt;/ul&gt;
&lt;/body&gt;</code></pre>
            <p>Given that body, we can also add a script <i>after</i> the body that takes a todos array, loops through it, and for each todo in the array, creates a <code>li</code> element and appends it to the todos list:</p>
            <pre><code>&lt;script&gt;
  window.todos = [];
  var todoContainer = document.querySelector("#todos");
  window.todos.forEach(todo =&gt; {
    var el = document.createElement("li");
    el.innerText = todo.name;
    todoContainer.appendChild(el);
  });
&lt;/script&gt;</code></pre>
            <p>Our static page can take in <code>window.todos</code>, and render HTML based on it, but we haven’t actually passed in any data from KV. To do this, we’ll need to make a couple changes.</p><p>First, our <code>html</code> <i>variable</i> will change to a <i>function</i>. The function will take in an argument, <code>todos</code>, which will populate the <code>window.todos</code> variable in the above code sample:</p>
            <pre><code>const html = todos =&gt; `
&lt;!doctype html&gt;
&lt;html&gt;
  &lt;!-- ... --&gt;
  &lt;script&gt;
    window.todos = ${todos || []}
    var todoContainer = document.querySelector("#todos");
    // ...
  &lt;/script&gt;
&lt;/html&gt;
`</code></pre>
            <p>In <code>handleRequest</code>, we can use the retrieved KV data to call the <code>html</code> function, and generate a <code>Response</code> based on it:</p>
            <pre><code>async function handleRequest(request) {
  let data;
  
  // Set data using cache or defaultData from previous section...
  
  const body = html(JSON.stringify(data.todos))
  const response = new Response(body, {
    headers: { 'Content-Type': 'text/html' }
  })
  return response
}</code></pre>
            <p>The finished product looks something like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/579BL1kiP7vJc6bxWcDD82/057188acb0323d3764e76419b13e7469/image3-3.png" />
            
            </figure>
    <div>
      <h3>Adding todos from the UI</h3>
      <a href="#adding-todos-from-the-ui">
        
      </a>
    </div>
    <p>At this point, we’ve built a Cloudflare Worker that takes data from Cloudflare KV and renders a static page based on it. That static page reads the data, and generates a todo list based on that data. Of course, the piece we’re missing is <i>creating</i> todos, from inside the UI. We know that we can add todos using the KV API - we could simply update the cache by saying <code>KRISTIAN_TODOS.put("data", newData)</code>, but how do we update it from inside the UI?</p><p>It’s worth noting here that Cloudflare’s Workers documentation suggests that any writes to your KV namespace happen via their API - that is, at its simplest form, a cURL statement:</p>
            <pre><code>curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/values/first-key" \
  -X PUT \
  -H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
  -H "X-Auth-Key: $CLOUDFLARE_AUTH_KEY" \
  --data 'My first value!'</code></pre>
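<p>The same write can be issued from JavaScript. Here’s a small helper of my own that builds the equivalent request; the endpoint shape mirrors the cURL call above, and the account and namespace IDs are placeholders you’d fill in yourself:</p>

```javascript
// Build the REST API request that mirrors the cURL call above.
// accountId and namespaceId are placeholders for your own values.
const kvWriteRequest = ({ accountId, namespaceId, key, value, email, authKey }) => ({
  url: `https://api.cloudflare.com/client/v4/accounts/${accountId}` +
       `/storage/kv/namespaces/${namespaceId}/values/${key}`,
  method: 'PUT',
  headers: {
    'X-Auth-Email': email,
    'X-Auth-Key': authKey,
  },
  body: value,
})

// Sending it is then a single fetch:
//   const { url, ...init } = kvWriteRequest({ /* ... */ })
//   await fetch(url, init)
```
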
            <p>We’ll implement something similar by handling a second route in our worker, designed to watch for <code>PUT</code> requests to <code>/</code>. When a body is received at that URL, the worker will send the new todo data to our KV store.</p><p>I’ll add this new functionality to my worker, and in <code>handleRequest</code>, if the request method is a <code>PUT</code>, it will take the request body and update the cache:</p>
            <pre><code>addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})

const setCache = data =&gt; KRISTIAN_TODOS.put("data", data)

async function updateTodos(request) {
  const body = await request.text()
  try {
    JSON.parse(body)
    await setCache(body)
    return new Response(body, { status: 200 })
  } catch (err) {
    return new Response(err, { status: 500 })
  }
}

async function handleRequest(request) {
  if (request.method === "PUT") {
    return updateTodos(request);
  } else {
    // Defined in previous code block
    return getTodos(request);
  }
}</code></pre>
            <p>The script is pretty straightforward - we check that the request is a <code>PUT</code>, and wrap the remainder of the code in a <code>try/catch</code> block. First, we parse the body of the request coming in, ensuring that it is JSON, before we update the cache with the new data, and return it to the user. If anything goes wrong, we simply return a 500. If the route is hit with an HTTP method <i>other</i> than <code>PUT</code> - that is, <code>GET</code>, <code>DELETE</code>, or anything else - we fall back to <code>getTodos</code> and serve the HTML page.</p><p>With this script, we can now add some “dynamic” functionality to our HTML page to actually hit this route.</p><p>First, we’ll create an input for our todo “name”, and a button for “submitting” the todo.</p>
            <pre><code>&lt;div&gt;
  &lt;input type="text" name="name" placeholder="A new todo"&gt;&lt;/input&gt;
  &lt;button id="create"&gt;Create&lt;/button&gt;
&lt;/div&gt;</code></pre>
            <p>Given that input and button, we can add a corresponding JavaScript function to watch for clicks on the button - once the button is clicked, the browser will <code>PUT</code> to <code>/</code> and submit the todo.</p>
            <pre><code>var createTodo = function() {
  var input = document.querySelector("input[name=name]");
  if (input.value.length) {
    fetch("/", { 
      method: 'PUT', 
      body: JSON.stringify({ todos: todos }) 
    });
  }
};

document.querySelector("#create")
  .addEventListener('click', createTodo);</code></pre>
            <p>This code updates the cache, but what about our local UI? Remember that the KV cache is <i>eventually consistent</i> - even if we were to update our worker to read from the cache and return it, we have no guarantees it’ll actually be up-to-date. Instead, let’s just update the list of todos locally, by taking our original code for rendering the todo list, making it a re-usable function called <code>populateTodos</code>, and calling it when the page loads <i>and</i> whenever a new todo is created:</p>
            <pre><code>var populateTodos = function() {
  var todoContainer = document.querySelector("#todos");
  todoContainer.innerHTML = null;
  window.todos.forEach(todo =&gt; {
    var el = document.createElement("li");
    el.innerText = todo.name;
    todoContainer.appendChild(el);
  });
};

populateTodos();

var createTodo = function() {
  var input = document.querySelector("input[name=name]");
  if (input.value.length) {
    todos = [].concat(todos, { 
      id: todos.length + 1, 
      name: input.value,
      completed: false,
    });
    fetch("/", { 
      method: 'PUT', 
      body: JSON.stringify({ todos: todos }) 
    });
    populateTodos();
    input.value = "";
  }
};

document.querySelector("#create")
  .addEventListener('click', createTodo);</code></pre>
            <p>With the client-side code in place, deploying the new Worker should put all these pieces together. The result is an actual dynamic todo list!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bexkIHx7QhOOnLX7m0T2a/d1cbb132f91d422682e12a169d1e3b1c/image7.gif" />
            
            </figure>
    <div>
      <h3>Updating todos from the UI</h3>
      <a href="#updating-todos-from-the-ui">
        
      </a>
    </div>
    <p>For the final piece of our (very) basic todo list, we need to be able to update todos - specifically, marking them as completed.</p><p>Luckily, a great deal of the infrastructure for this work is already in place. We can currently update the todo list data in our cache, as evidenced by our <code>createTodo</code> function. Performing updates on a todo, in fact, is much more of a client-side task than a Worker-side one!</p><p>To start, let’s update the client-side code for generating a todo. Instead of a <code>ul</code>-based list, we’ll migrate the todo container <i>and</i> the todos themselves into using <code>div</code>s:</p>
            <pre><code>&lt;!-- &lt;ul id="todos"&gt;&lt;/ul&gt; becomes... --&gt;
&lt;div id="todos"&gt;&lt;/div&gt;</code></pre>
            <p>The <code>populateTodos</code> function can be updated to generate a <code>div</code> for each todo. In addition, we’ll move the name of the todo into a child element of that <code>div</code>:</p>
            <pre><code>var populateTodos = function() {
  var todoContainer = document.querySelector("#todos");
  todoContainer.innerHTML = null;
  window.todos.forEach(todo =&gt; {
    var el = document.createElement("div");
    var name = document.createElement("span");
    name.innerText = todo.name;
    el.appendChild(name);
    todoContainer.appendChild(el);
  });
}</code></pre>
            <p>So far, we’ve designed the client-side part of this code to take an array of todos in, and given that array, render out a list of simple HTML elements. There are a number of things that we’ve been doing that we haven’t quite had a use for, yet: specifically, the inclusion of IDs, and updating the completed value on a todo. Luckily, these two things work together to support actually updating todos in the UI.</p><p>To start, it would be useful to signify the ID of each todo in the HTML. By doing this, we can then refer to the element later, and match it up with the todo in the JavaScript part of our code. <a href="https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/dataset"><i>Data attributes</i></a>, and the corresponding dataset method in JavaScript, are a perfect way to implement this. When we generate our <code>div</code> element for each todo, we can simply attach a data attribute called <code>todo</code> to each <code>div</code>:</p>
            <pre><code>window.todos.forEach(todo =&gt; {
  var el = document.createElement("div");
  el.dataset.todo = todo.id
  // ... more setup

  todoContainer.appendChild(el);
});</code></pre>
            <p>Inside our HTML, each <code>div</code> for a todo now has an attached data attribute, which looks like:</p>
            <pre><code>&lt;div data-todo="1"&gt;&lt;/div&gt;
&lt;div data-todo="2"&gt;&lt;/div&gt;</code></pre>
            <p>Now we can generate a checkbox for each todo element. This checkbox will default to unchecked for new todos, of course, but we can mark it as checked as the element is rendered in the window:</p>
            <pre><code>window.todos.forEach(todo =&gt; {
  var el = document.createElement("div");
  el.dataset.todo = todo.id
  
  var name = document.createElement("span");
  name.innerText = todo.name;
  
  var checkbox = document.createElement("input")
  checkbox.type = "checkbox"
  checkbox.checked = todo.completed;

  el.appendChild(checkbox);
  el.appendChild(name);
  todoContainer.appendChild(el);
})</code></pre>
            <p>The checkbox is set up to correctly reflect the value of completed on each todo, but it doesn’t yet update when we actually check the box! To do this, we’ll add an event listener on the <code>click</code> event, calling <code>completeTodo</code>. Inside the function, we’ll inspect the checkbox element, finding its parent (the todo <code>div</code>), and using the <code>todo</code> data attribute on it to find the corresponding todo in our data. Given that todo, we can toggle the value of completed, update our data, and re-render the UI:</p>
            <pre><code>var completeTodo = function(evt) {
  var checkbox = evt.target;
  var todoElement = checkbox.parentNode;
  
  var newTodoSet = [].concat(window.todos)
  var todo = newTodoSet.find(t =&gt; 
    t.id == todoElement.dataset.todo
  );
  todo.completed = !todo.completed;
  todos = newTodoSet;
  updateTodos()
}</code></pre>
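<p>One note: <code>completeTodo</code> ends by calling <code>updateTodos()</code>, a client-side helper that doesn’t appear in the snippets above. A minimal version, assuming the <code>fetch</code>-based persistence and the <code>populateTodos</code> function from earlier, might look like this (my sketch, not necessarily the exact code from the finished project):</p>

```javascript
// Client-side helper assumed by completeTodo above: persist the current
// list to the Worker, then re-render the UI from local state.
var updateTodos = function() {
  fetch("/", {
    method: 'PUT',
    body: JSON.stringify({ todos: window.todos })
  });
  populateTodos();
};
```

<p>The checkbox listener mentioned in the text would be attached as each todo is rendered, e.g. <code>checkbox.addEventListener('click', completeTodo)</code> inside <code>populateTodos</code>.</p>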
            <p>The final result of our code is a system that simply checks the <code>todos</code> variable, updates our Cloudflare KV cache with that value, and then does a straightforward re-render of the UI based on the data it has locally.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dSK1aNSHxnICPPqdBZeS1/4612465de022714c1d7a7cc64e261d55/image8-1.png" />
            
            </figure>
    <div>
      <h3>Conclusions and next steps</h3>
      <a href="#conclusions-and-next-steps">
        
      </a>
    </div>
    <p>With this, we’ve created a pretty remarkable project: an almost entirely static HTML/JS application, transparently powered by Cloudflare KV and Workers, served at the edge. There are a number of additions you could make to this application, whether that’s a better design (I’ll leave this as an exercise for readers - you can see my version at <a href="https://todo.kristianfreeman.com/">todo.kristianfreeman.com</a>), improved security, better performance, and so on.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Vif8WsAf5slR8KM3NLeGv/6cd59bcdf0c30a6df7e4bbf9eef916ba/image2-1.png" />
            
            </figure><p>One interesting and fairly trivial addition is per-user caching. Right now, the cache key is simply “data”: anyone visiting the site shares a todo list with every other user. Because we have the request information inside of our worker, it’s easy to make this data user-specific - for instance, by generating the cache key from the requesting IP:</p>
            <pre><code>const ip = request.headers.get("CF-Connecting-IP")
const cacheKey = `data-${ip}`;
const getCache = key =&gt; KRISTIAN_TODOS.get(key)
getCache(cacheKey)</code></pre>
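            <p>The same key derivation has to happen on the write path too: the worker’s <code>updateTodos</code> must compute <code>data-${ip}</code> before calling <code>setCache</code>, or a visitor’s writes would land under a different key than their reads. Factoring it into a helper makes the symmetry clear (<code>cacheKeyFor</code> is our name for illustration; the actual script inlines the expression):</p>

```javascript
// A hypothetical helper (not in the article's script) that derives the
// per-user cache key. Both the GET and PUT paths must use the same
// derivation so each visitor reads and writes their own todo list.
const cacheKeyFor = request => {
  const ip = request.headers.get("CF-Connecting-IP")
  return `data-${ip}`
}

// A stub request object standing in for the Workers Request:
const request = { headers: new Map([["CF-Connecting-IP", "203.0.113.9"]]) }
console.log(cacheKeyFor(request)) // data-203.0.113.9
```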
            <p>One more deploy of our Workers project, and we have a full todo list application, with per-user functionality, served at the edge!</p><p>The final version of our Workers script looks like this:</p>
            <pre><code>const html = todos =&gt; `
&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset="UTF-8"&gt;
    &lt;meta name="viewport" content="width=device-width,initial-scale=1"&gt;
    &lt;title&gt;Todos&lt;/title&gt;
    &lt;link href="https://cdn.jsdelivr.net/npm/tailwindcss/dist/tailwind.min.css" rel="stylesheet"&gt;&lt;/link&gt;
  &lt;/head&gt;

  &lt;body class="bg-blue-100"&gt;
    &lt;div class="w-full h-full flex content-center justify-center mt-8"&gt;
      &lt;div class="bg-white shadow-md rounded px-8 pt-6 py-8 mb-4"&gt;
        &lt;h1 class="block text-grey-800 text-md font-bold mb-2"&gt;Todos&lt;/h1&gt;
        &lt;div class="flex"&gt;
          &lt;input class="shadow appearance-none border rounded w-full py-2 px-3 text-grey-800 leading-tight focus:outline-none focus:shadow-outline" type="text" name="name" placeholder="A new todo"&gt;&lt;/input&gt;
          &lt;button class="bg-blue-500 hover:bg-blue-800 text-white font-bold ml-2 py-2 px-4 rounded focus:outline-none focus:shadow-outline" id="create" type="submit"&gt;Create&lt;/button&gt;
        &lt;/div&gt;
        &lt;div class="mt-4" id="todos"&gt;&lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/body&gt;

  &lt;script&gt;
    window.todos = ${todos || []}

    var updateTodos = function() {
      fetch("/", { method: 'PUT', body: JSON.stringify({ todos: window.todos }) })
      populateTodos()
    }

    var completeTodo = function(evt) {
      var checkbox = evt.target
      var todoElement = checkbox.parentNode
      var newTodoSet = [].concat(window.todos)
      var todo = newTodoSet.find(t =&gt; t.id == todoElement.dataset.todo)
      todo.completed = !todo.completed
      window.todos = newTodoSet
      updateTodos()
    }

    var populateTodos = function() {
      var todoContainer = document.querySelector("#todos")
      todoContainer.innerHTML = null

      window.todos.forEach(todo =&gt; {
        var el = document.createElement("div")
        el.className = "border-t py-4"
        el.dataset.todo = todo.id

        var name = document.createElement("span")
        name.className = todo.completed ? "line-through" : ""
        name.innerText = todo.name

        var checkbox = document.createElement("input")
        checkbox.className = "mx-4"
        checkbox.type = "checkbox"
        checkbox.checked = todo.completed
        checkbox.addEventListener('click', completeTodo)

        el.appendChild(checkbox)
        el.appendChild(name)
        todoContainer.appendChild(el)
      })
    }

    populateTodos()

    var createTodo = function() {
      var input = document.querySelector("input[name=name]")
      if (input.value.length) {
        window.todos = [].concat(window.todos, { id: window.todos.length + 1, name: input.value, completed: false })
        input.value = ""
        updateTodos()
      }
    }

    document.querySelector("#create").addEventListener('click', createTodo)
  &lt;/script&gt;
&lt;/html&gt;
`

const defaultData = { todos: [] }

const setCache = (key, data) =&gt; KRISTIAN_TODOS.put(key, data)
const getCache = key =&gt; KRISTIAN_TODOS.get(key)

async function getTodos(request) {
  const ip = request.headers.get('CF-Connecting-IP')
  const cacheKey = `data-${ip}`
  let data
  const cache = await getCache(cacheKey)
  if (!cache) {
    await setCache(cacheKey, JSON.stringify(defaultData))
    data = defaultData
  } else {
    data = JSON.parse(cache)
  }
  const body = html(JSON.stringify(data.todos || []))
  return new Response(body, {
    headers: { 'Content-Type': 'text/html' },
  })
}

async function updateTodos(request) {
  const body = await request.text()
  const ip = request.headers.get('CF-Connecting-IP')
  const cacheKey = `data-${ip}`
  try {
    JSON.parse(body)
    await setCache(cacheKey, body)
    return new Response(body, { status: 200 })
  } catch (err) {
    return new Response(err.toString(), { status: 500 })
  }
}

async function handleRequest(request) {
  if (request.method === 'PUT') {
    return updateTodos(request)
  } else {
    return getTodos(request)
  }
}

addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})</code></pre>
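            <p>One detail worth noting in <code>updateTodos</code>: the worker parses the PUT body before caching it, so a malformed payload never makes it into KV. That guard can be seen in isolation (<code>validateBody</code> is a hypothetical wrapper for illustration; the script inlines the try/catch):</p>

```javascript
// Mirrors the try/catch in the worker's updateTodos: the body is only
// accepted (and later written to KV) if it parses as JSON.
const validateBody = body => {
  try {
    JSON.parse(body)
    return { ok: true }
  } catch (err) {
    return { ok: false, error: err.toString() }
  }
}

console.log(validateBody('{"todos":[]}').ok) // true
console.log(validateBody('not json').ok)     // false
```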
            <p>You can find the source code for this project, as well as a README with deployment instructions, on <a href="https://github.com/signalnerve/cloudflare-workers-todos">GitHub</a>.</p> ]]></content:encoded>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">1KZ1qTWOTEOhmuFFXT6xuC</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
    </channel>
</rss>