
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 23:00:27 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Building a serverless, post-quantum Matrix homeserver]]></title>
            <link>https://blog.cloudflare.com/serverless-matrix-homeserver-workers/</link>
            <pubDate>Tue, 27 Jan 2026 14:00:00 GMT</pubDate>
            <description><![CDATA[ As a proof of concept, we ported a Matrix homeserver to Cloudflare Workers — delivering encrypted messaging at the edge with automatic post-quantum cryptography. ]]></description>
            <content:encoded><![CDATA[ <p><sup><i>* This post was updated at 11:45 a.m. Pacific time to clarify that the use case described here is a proof of concept and a personal project. Some sections have been updated for clarity.</i></sup></p><p>Matrix is the gold standard for decentralized, end-to-end encrypted communication. It powers government messaging systems, open-source communities, and privacy-focused organizations worldwide. </p><p>For the individual developer, however, the appeal is often closer to home: bridging fragmented chat networks (like Discord and Slack) into a single inbox, or simply ensuring your conversation history lives on infrastructure you control. Functionally, Matrix operates as a decentralized, eventually consistent state machine. Instead of a central server pushing updates, homeservers exchange signed JSON events over HTTP, using a conflict resolution algorithm to merge these streams into a unified view of the room's history.</p><p><b>But there is a "tax" to running it. </b>Traditionally, operating a Matrix <a href="https://matrix.org/homeserver/about/"><u>homeserver</u></a> has meant accepting a heavy operational burden. You have to provision virtual private servers (VPS), tune PostgreSQL for heavy write loads, manage Redis for caching, configure <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/"><u>reverse proxies</u></a>, and handle rotation for <a href="https://www.cloudflare.com/application-services/products/ssl/">TLS certificates</a>. It’s a stateful, heavy beast that demands to be fed time and money, whether you’re using it a lot or a little.</p><p>We wanted to see if we could eliminate that tax entirely.</p><p><b>Spoiler: We could.</b> In this post, we’ll explain how we ported a Matrix homeserver to <a href="https://workers.cloudflare.com/"><u>Cloudflare Workers</u></a>. 
The resulting proof of concept is a serverless architecture where operations disappear, costs scale to zero when idle, and every connection is protected by <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/"><u>post-quantum cryptography</u></a> by default. You can view the source code and <a href="https://github.com/nkuntz1934/matrix-workers"><u>deploy your own instance directly from GitHub</u></a>.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/nkuntz1934/matrix-workers"><img src="https://deploy.workers.cloudflare.com/button" /></a>
    <div>
      <h2>From Synapse to Workers</h2>
      <a href="#from-synapse-to-workers">
        
      </a>
    </div>
    <p>Our starting point was <a href="https://github.com/matrix-org/synapse"><u>Synapse</u></a>, the Python-based reference Matrix homeserver designed for traditional deployments: PostgreSQL for persistence, Redis for caching, the filesystem for media.</p><p>Porting a homeserver to Workers meant questioning every storage assumption we’d taken for granted.</p><p>The challenge was storage. Traditional homeservers assume strong consistency via a central SQL database. Cloudflare <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> offers a powerful alternative. This primitive gives us the strong consistency and atomicity required for Matrix state resolution, while still allowing the application to run at the edge.</p><p>We ported the core Matrix protocol logic — event authorization, room state resolution, cryptographic verification — in Rust, compiled to WebAssembly for the Workers runtime. D1 replaces PostgreSQL, KV replaces Redis, R2 replaces the filesystem, and Durable Objects handle real-time coordination.</p><p>Here’s how the mapping worked out:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JTja38UZRbFygluawrnz1/9bce290e3070155c734e874c17051551/BLOG-3101_2.png" />
          </figure>
    <div>
      <h2>From monolith to serverless</h2>
      <a href="#from-monolith-to-serverless">
        
      </a>
    </div>
    <p>Moving to Cloudflare Workers brings several advantages for a developer: simple deployment, lower costs, low latency, and built-in security.</p><p><b>Easy deployment: </b>A traditional Matrix deployment requires server provisioning, PostgreSQL administration, Redis cluster management, <a href="https://www.cloudflare.com/application-services/solutions/certificate-lifecycle-management/">TLS certificate renewal</a>, load balancer configuration, monitoring infrastructure, and on-call rotations.</p><p>With Workers, deployment is a single command: <code>wrangler deploy</code>. Workers handles TLS, load balancing, DDoS protection, and global distribution.</p><p><b>Usage-based costs: </b>Traditional homeservers cost money whether anyone is using them or not. Workers pricing is request-based, so you pay when you’re using it, but costs drop to near zero when everyone’s asleep.</p><p><b>Lower latency globally:</b> A traditional Matrix homeserver in us-east-1 adds 200ms+ latency for users in Asia or Europe. Workers, meanwhile, run in 300+ locations worldwide. When a user in Tokyo sends a message, the Worker executes in Tokyo.</p><p><b>Built-in security: </b>Matrix homeservers can be high-value targets: they handle encrypted communications, store message history, and authenticate users. Traditional deployments require careful hardening: firewall configuration, rate limiting, DDoS mitigation, WAF rules, IP reputation filtering.</p><p>Workers provide all of this by default.</p>
    <div>
      <h3>Post-quantum protection </h3>
      <a href="#post-quantum-protection">
        
      </a>
    </div>
    <p>Cloudflare began deploying post-quantum hybrid key agreement across <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/"><u>TLS 1.3</u></a> connections in <a href="https://blog.cloudflare.com/post-quantum-for-all/"><u>October 2022</u></a>. Today, every connection to our Worker from a supporting client automatically negotiates X25519MLKEM768 — a hybrid combining classical X25519 with ML-KEM, the post-quantum key encapsulation mechanism standardized by NIST.</p><p>Classical key agreement relies on mathematical problems that are hard for traditional computers but tractable for quantum computers running Shor’s algorithm. ML-KEM is based on lattice problems that remain hard even for quantum computers. The hybrid approach means both algorithms must fail for the connection to be compromised.</p>
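<p>The "both must fail" property comes from deriving the session key from both shared secrets. Below is a minimal sketch of the combiner idea — Rust's standard hasher stands in for a real KDF, purely for illustration; actual TLS 1.3 hybrids feed both secrets into the handshake key schedule:</p>

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for a real KDF such as HKDF; illustration only.
fn kdf(label: &str, input: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    label.hash(&mut h);
    input.hash(&mut h);
    h.finish()
}

// The session key depends on BOTH shared secrets: an attacker must
// break X25519 AND ML-KEM to recover it. If either secret stays
// hidden, the derived key stays hidden.
fn hybrid_session_key(classical_secret: &[u8], pq_secret: &[u8]) -> u64 {
    let mut combined = Vec::with_capacity(classical_secret.len() + pq_secret.len());
    combined.extend_from_slice(classical_secret);
    combined.extend_from_slice(pq_secret);
    kdf("hybrid-key-agreement", &combined)
}
```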
    <div>
      <h3>Following a message through the system</h3>
      <a href="#following-a-message-through-the-system">
        
      </a>
    </div>
    <p>Understanding where encryption happens matters for security architecture. When someone sends a message through our homeserver, here’s the actual path:</p><p>The sender’s client takes the plaintext message and encrypts it with Megolm — Matrix’s end-to-end encryption. This encrypted payload then gets wrapped in TLS for transport. On Cloudflare, that TLS connection uses X25519MLKEM768, making it quantum-resistant.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wGGYZ4LYspufH1c4psmL1/28acad8ab8e6535525dda413669c2d74/BLOG-3101_3.png" />
          </figure><p>The Worker terminates TLS, but what it receives is still encrypted — the Megolm ciphertext. We store that ciphertext in D1, index it by room and timestamp, and deliver it to recipients. But we never see the plaintext. The message “Hello, world” exists only on the sender’s device and the recipient’s device.</p><p>When the recipient syncs, the process reverses. They receive the encrypted payload over another quantum-resistant TLS connection, then decrypt locally with their Megolm session keys.</p>
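<p>The layering can be sketched with toy stand-in ciphers — repeating XOR here, strictly for illustration; real deployments use Megolm's ratchet and TLS 1.3. The point is structural: unwrapping the transport layer does not reveal the plaintext.</p>

```rust
// Toy stand-in cipher (repeating XOR). NOT real cryptography —
// it only illustrates how the two layers nest.
fn xor(data: &[u8], key: &[u8]) -> Vec<u8> {
    data.iter().zip(key.iter().cycle()).map(|(b, k)| b ^ k).collect()
}

// Sender: E2EE ("Megolm") layer first, then transport ("TLS") layer.
fn send(plaintext: &[u8], megolm_key: &[u8], tls_key: &[u8]) -> Vec<u8> {
    let e2ee_ciphertext = xor(plaintext, megolm_key);
    xor(&e2ee_ciphertext, tls_key) // wrapped for transport
}

// The Worker terminates TLS but holds no Megolm key: after removing
// the transport layer it still sees only E2EE ciphertext.
fn worker_view(wire: &[u8], tls_key: &[u8]) -> Vec<u8> {
    xor(wire, tls_key)
}

// The recipient decrypts the E2EE layer locally, on device.
fn recipient_decrypt(stored: &[u8], megolm_key: &[u8]) -> Vec<u8> {
    xor(stored, megolm_key)
}
```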
    <div>
      <h3>Two layers, independent protection</h3>
      <a href="#two-layers-independent-protection">
        
      </a>
    </div>
    <p>This protects via two encryption layers that operate independently:</p><p>The <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>transport layer (TLS)</u></a> protects data in transit. It’s encrypted at the client and decrypted at the Cloudflare edge. With X25519MLKEM768, this layer is now post-quantum.</p><p>The <a href="https://www.cloudflare.com/learning/ddos/what-is-layer-7/"><u>application layer</u></a> (Megolm E2EE) protects message content. It’s encrypted on the sender’s device and decrypted only on recipient devices. This uses classical Curve25519 cryptography.</p>
    <div>
      <h3>Who sees what</h3>
      <a href="#who-sees-what">
        
      </a>
    </div>
    <p>Any Matrix homeserver operator — whether running Synapse on a VPS or this implementation on Workers — can see metadata: which rooms exist, who’s in them, when messages were sent. But no one in the infrastructure chain can see the message content, because the E2EE payload is encrypted on sender devices before it ever hits the network. Cloudflare terminates TLS and passes requests to your Worker, but both see only Megolm ciphertext. Media in encrypted rooms is encrypted client-side before upload, and private keys never leave user devices.</p>
    <div>
      <h3>What traditional deployments would need</h3>
      <a href="#what-traditional-deployments-would-need">
        
      </a>
    </div>
    <p>Achieving post-quantum TLS on a traditional Matrix deployment would require upgrading OpenSSL or BoringSSL to a version supporting ML-KEM, configuring cipher suite preferences correctly, testing client compatibility across all Matrix apps, monitoring for TLS negotiation failures, staying current as PQC standards evolve, and handling clients that don’t support PQC gracefully.</p><p>With Workers, it’s automatic. Chrome, Firefox, and Edge all support X25519MLKEM768. Mobile apps using platform TLS stacks inherit this support. The security posture improves as Cloudflare’s <a href="https://developers.cloudflare.com/ssl/post-quantum-cryptography/"><u>PQC</u></a> deployment expands — no action required on our part.</p>
    <div>
      <h2>The storage architecture that made it work</h2>
      <a href="#the-storage-architecture-that-made-it-work">
        
      </a>
    </div>
    <p>The key insight from porting Tuwunel, a Rust-based Matrix homeserver, was that different data needs different consistency guarantees. We use each Cloudflare primitive for what it does best.</p>
    <div>
      <h3>D1 for the data model</h3>
      <a href="#d1-for-the-data-model">
        
      </a>
    </div>
    <p>D1 stores everything that needs to survive restarts and support queries: users, rooms, events, device keys. Over 25 tables covering the full Matrix data model. </p>
            <pre><code>CREATE TABLE events (
	event_id TEXT PRIMARY KEY,
	room_id TEXT NOT NULL,
	sender TEXT NOT NULL,
	event_type TEXT NOT NULL,
	state_key TEXT,
	content TEXT NOT NULL,
	origin_server_ts INTEGER NOT NULL,
	depth INTEGER NOT NULL
);
</code></pre>
            <p><a href="https://www.cloudflare.com/developer-platform/products/d1/">D1’s SQLite foundation</a> meant we could port Tuwunel’s queries with minimal changes. Joins, indexes, and aggregations work as expected.</p><p>We learned one hard lesson: D1’s eventual consistency breaks foreign key constraints. A write to rooms might not be visible when a subsequent write to events checks the foreign key. We removed all foreign keys and enforce referential integrity in application code.</p>
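<p>With foreign keys removed from the schema, the existence check moves into application code. Here is a hypothetical in-memory sketch of that pattern — the real code issues D1 queries instead of touching hash maps:</p>

```rust
use std::collections::{HashMap, HashSet};

// Application-level referential integrity: since D1's schema has no
// foreign keys, we verify that the referenced room exists before
// inserting an event. In-memory stand-ins for the rooms/events tables.
struct Store {
    rooms: HashSet<String>,
    events: HashMap<String, String>, // event_id -> room_id
}

impl Store {
    fn new() -> Self {
        Store { rooms: HashSet::new(), events: HashMap::new() }
    }

    fn insert_event(&mut self, event_id: &str, room_id: &str) -> Result<(), String> {
        // The check the database used to perform for us.
        if !self.rooms.contains(room_id) {
            return Err(format!("unknown room: {room_id}"));
        }
        self.events.insert(event_id.to_string(), room_id.to_string());
        Ok(())
    }
}
```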
    <div>
      <h3>KV for ephemeral state</h3>
      <a href="#kv-for-ephemeral-state">
        
      </a>
    </div>
    <p>KV holds short-lived state that expires on its own: OAuth authorization codes live for 10 minutes, while refresh tokens last for a session.</p>
            <pre><code>// Store OAuth code with 10-minute TTL
kv.put(&amp;format!("oauth_code:{}", code), &amp;token_data)?
	.expiration_ttl(600)
	.execute()
	.await?;</code></pre>
            <p>KV’s global distribution means OAuth flows work fast regardless of where users are located.</p>
    <div>
      <h3>R2 for media</h3>
      <a href="#r2-for-media">
        
      </a>
    </div>
    <p>Matrix media maps directly to R2: you upload an image, get back a content-addressed URL, and egress is free.</p>
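<p>Content addressing can be sketched by deriving the object key from the bytes themselves, so identical uploads map to the same R2 object. The hasher below is a stand-in for a real content hash such as SHA-256, and the URL layout is illustrative:</p>

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a media ID from the content itself. DefaultHasher is a
// stand-in for a real content hash (e.g. SHA-256); same bytes in,
// same ID out.
fn media_id(content: &[u8]) -> String {
    let mut h = DefaultHasher::new();
    content.hash(&mut h);
    format!("{:016x}", h.finish())
}

// Hypothetical mxc:// URL layout for an R2-backed media repository.
fn mxc_url(server: &str, content: &[u8]) -> String {
    format!("mxc://{}/{}", server, media_id(content))
}
```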
    <div>
      <h3>Durable Objects for atomicity</h3>
      <a href="#durable-objects-for-atomicity">
        
      </a>
    </div>
    <p>Some operations can’t tolerate eventual consistency. When a client claims a one-time encryption key, that key must be atomically removed. If two clients claim the same key, encrypted session establishment fails.</p><p>Durable Objects provide single-threaded, strongly consistent storage:</p>
            <pre><code>#[durable_object]
pub struct UserKeysObject {
	state: State,
	env: Env,
}

impl UserKeysObject {
	async fn claim_otk(&amp;self, algorithm: &amp;str) -&gt; Result&lt;Option&lt;Key&gt;&gt; {
    	// Atomic within single DO - no race conditions possible
    	let mut keys: Vec&lt;Key&gt; = self.state.storage()
        	.get("one_time_keys")
        	.await
        	.unwrap_or_default();

    	if let Some(idx) = keys.iter().position(|k| k.algorithm == algorithm) {
        	let key = keys.remove(idx);
        	self.state.storage().put("one_time_keys", &amp;keys).await?;
        	return Ok(Some(key));
    	}
    	Ok(None)
	}
}</code></pre>
            <p>We use UserKeysObject for E2EE key management, RoomObject for real-time room events like typing indicators and read receipts, and UserSyncObject for to-device message queues. The rest flows through D1.</p>
    <div>
      <h3>Complete end-to-end encryption, complete OAuth</h3>
      <a href="#complete-end-to-end-encryption-complete-oauth">
        
      </a>
    </div>
    <p>Our implementation supports the full Matrix E2EE stack: device keys, cross-signing keys, one-time keys, fallback keys, key backup, and dehydrated devices.</p><p>Modern Matrix clients use OAuth 2.0/OIDC instead of legacy password flows. We implemented a complete OAuth provider, with dynamic client registration, PKCE authorization, RS256-signed JWT tokens, token refresh with rotation, and standard OIDC discovery endpoints.
</p>
            <pre><code>curl https://matrix.example.com/.well-known/openid-configuration
{
  "issuer": "https://matrix.example.com",
  "authorization_endpoint": "https://matrix.example.com/oauth/authorize",
  "token_endpoint": "https://matrix.example.com/oauth/token",
  "jwks_uri": "https://matrix.example.com/.well-known/jwks.json"
}
</code></pre>
            <p>Point Element or any Matrix client at the domain, and it discovers everything automatically.</p>
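<p>Token refresh with rotation can be sketched in miniature: each refresh token is single-use, and redeeming it atomically issues a replacement, so a replayed (possibly stolen) token is rejected. This in-memory version is illustrative only — the real implementation issues signed JWTs and persists state:</p>

```rust
use std::collections::HashMap;

// Refresh token rotation: redeeming a token consumes it and issues
// a new one. A replay of an already-rotated token fails.
struct TokenStore {
    active: HashMap<String, String>, // refresh_token -> user_id
    counter: u64,
}

impl TokenStore {
    fn new() -> Self {
        TokenStore { active: HashMap::new(), counter: 0 }
    }

    fn issue(&mut self, user_id: &str) -> String {
        self.counter += 1;
        let token = format!("rt_{}", self.counter);
        self.active.insert(token.clone(), user_id.to_string());
        token
    }

    // Rotation: the old token is removed in the same step that the
    // replacement is created.
    fn refresh(&mut self, token: &str) -> Option<String> {
        let user_id = self.active.remove(token)?;
        Some(self.issue(&user_id))
    }
}
```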
    <div>
      <h2>Sliding Sync for mobile</h2>
      <a href="#sliding-sync-for-mobile">
        
      </a>
    </div>
    <p>Traditional Matrix sync transfers megabytes of data on initial connection, draining mobile battery and data plans.</p><p>Sliding Sync lets clients request exactly what they need. Instead of downloading everything, clients get the 20 most recent rooms with minimal state. As users scroll, they request more ranges. The server tracks position and sends only deltas.</p><p>Combined with edge execution, mobile clients can connect and render their room list in under 500ms, even on slow networks.</p>
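<p>The windowing idea in miniature: the client asks for a range into the server's recency-ordered room list. This is a simplified sketch — the real protocol also tracks per-connection state and sends per-room deltas:</p>

```rust
// A room with its most recent activity timestamp.
#[derive(Clone, Debug, PartialEq)]
struct Room {
    id: String,
    last_activity_ts: u64,
}

// Return the inclusive range [start, end] of the room list ordered
// by recency, newest first — the "window" a sliding-sync client asks for.
fn window(rooms: &mut Vec<Room>, start: usize, end: usize) -> Vec<Room> {
    rooms.sort_by(|a, b| b.last_activity_ts.cmp(&a.last_activity_ts));
    rooms.iter().skip(start).take(end + 1 - start).cloned().collect()
}
```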
    <div>
      <h2>The comparison</h2>
      <a href="#the-comparison">
        
      </a>
    </div>
    <p>For a homeserver serving a small team:</p><table><tr><th><p> </p></th><th><p><b>Traditional (VPS)</b></p></th><th><p><b>Workers</b></p></th></tr><tr><td><p>Monthly cost (idle)</p></td><td><p>$20-50</p></td><td><p>&lt;$1</p></td></tr><tr><td><p>Monthly cost (active)</p></td><td><p>$20-50</p></td><td><p>$3-10</p></td></tr><tr><td><p>Global latency</p></td><td><p>100-300ms</p></td><td><p>20-50ms</p></td></tr><tr><td><p>Time to deploy</p></td><td><p>Hours</p></td><td><p>Seconds</p></td></tr><tr><td><p>Maintenance</p></td><td><p>Weekly</p></td><td><p>None</p></td></tr><tr><td><p>DDoS protection</p></td><td><p>Additional cost</p></td><td><p>Included</p></td></tr><tr><td><p>Post-quantum TLS</p></td><td><p>Complex setup</p></td><td><p>Automatic</p></td></tr></table><p><sup>*</sup><sup><i>Based on public rates and metrics published by DigitalOcean, AWS Lightsail, and Linode as of January 15, 2026.</i></sup></p><p>The economics improve further at scale. Traditional deployments require capacity planning and over-provisioning. Workers scale automatically.</p>
    <div>
      <h2>The future of decentralized protocols</h2>
      <a href="#the-future-of-decentralized-protocols">
        
      </a>
    </div>
    <p>We started this as an experiment: could Matrix run on Workers? It can—and the approach can work for other stateful protocols, too.</p><p>By mapping traditional stateful components to Cloudflare’s primitives — Postgres to D1, Redis to KV, mutexes to Durable Objects — we can see that complex applications don't need complex infrastructure. We stripped away the operating system, the database management, and the network configuration, leaving only the application logic and the data itself.</p><p>Workers offers the sovereignty of owning your data, without the burden of owning the infrastructure.</p><p>I have been experimenting with the implementation and am excited for any contributions from others interested in this kind of service.</p><p>Ready to build powerful, real-time applications on Workers? Get started with<a href="https://developers.cloudflare.com/workers/"> <u>Cloudflare Workers</u></a> and explore<a href="https://developers.cloudflare.com/durable-objects/"> <u>Durable Objects</u></a> for your own stateful edge applications. Join our<a href="https://discord.cloudflare.com"> <u>Discord community</u></a> to connect with other developers building at the edge.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[D1]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[R2]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Rust]]></category>
            <category><![CDATA[WebAssembly]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Encryption]]></category>
            <guid isPermaLink="false">6VOVAMNwIZ18hMaUlC6aqp</guid>
            <dc:creator>Nick Kuntz</dc:creator>
        </item>
        <item>
            <title><![CDATA[Redesigning Workers KV for increased availability and faster performance]]></title>
            <link>https://blog.cloudflare.com/rearchitecting-workers-kv-for-redundancy/</link>
            <pubDate>Fri, 08 Aug 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Workers KV is Cloudflare's global key-value store. After the incident on June 12, we re-architected KV’s storage backend for redundancy, removed single points of failure, and made substantial performance improvements.
 ]]></description>
            <content:encoded><![CDATA[ <p>On June 12, 2025, Cloudflare suffered a significant service outage that affected a large set of our critical services. As explained in <a href="https://blog.cloudflare.com/cloudflare-service-outage-june-12-2025/"><u>our blog post about the incident</u></a>, the cause was a failure in the underlying storage infrastructure used by our Workers KV service. Workers KV is not only relied upon by many customers, but serves as critical infrastructure for many other Cloudflare products, handling configuration, authentication and asset delivery across the affected services. Part of this infrastructure was backed by a third-party cloud provider, which experienced an outage on June 12 and directly impacted availability of our KV service.</p><p>Today we're providing an update on the improvements that have been made to Workers KV to ensure that a similar outage cannot happen again. We are now storing all data on our own infrastructure. We are also serving all requests from our own infrastructure in addition to any third-party cloud providers used for redundancy, ensuring high availability and eliminating single points of failure. Finally, the work has meaningfully improved performance and set a clear path for the removal of any reliance on third-party providers as redundant back-ups.</p>
    <div>
      <h2>Background: The Original Architecture</h2>
      <a href="#background-the-original-architecture">
        
      </a>
    </div>
    <p>Workers KV is a global key-value store that supports high read volumes with low latency. Behind the scenes, the service stores data in regional storage and caches data across Cloudflare's network to deliver exceptional read performance, making it ideal for configuration data, static assets, and user preferences that need to be available instantly around the globe.</p><p>Workers KV was initially launched in September 2018, predating Cloudflare-native storage services like Durable Objects and R2. As a result, Workers KV's original design leveraged <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a> offerings from multiple third-party cloud service providers, maximizing availability via provider redundancy. The system operated in an active-active configuration, successfully serving requests even when one of the providers was unavailable, experiencing errors, or performing slowly.</p><p>Requests to Workers KV were handled by Storage Gateway Worker (SGW), a service running on Cloudflare Workers. When it received a write request, SGW would simultaneously write the key-value pair to two different third-party object storage providers, ensuring that data was always available from multiple independent sources. Deletes were handled similarly, by writing a special tombstone value in place of the object to mark the key as deleted, with these tombstones garbage collected later.</p><p>Reads from Workers KV could usually be served from Cloudflare's cache, providing reliably low latency. For reads of data not in cache, the system would race requests against both providers and return whichever response arrived first, typically from the geographically closer provider. This racing approach optimized read latency by always taking the fastest response while providing resilience against provider issues.</p>
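<p>The tombstone-delete pattern described above can be sketched as follows — an in-memory toy, purely illustrative; the real system writes tombstones to object storage and garbage-collects them asynchronously:</p>

```rust
use std::collections::HashMap;

// A sentinel value marking a deleted key. Readers treat it as
// "not found"; a background sweep removes it later.
const TOMBSTONE: &str = "\u{0}tombstone";

struct Kv {
    map: HashMap<String, String>,
}

impl Kv {
    fn new() -> Self {
        Kv { map: HashMap::new() }
    }

    fn put(&mut self, k: &str, v: &str) {
        self.map.insert(k.into(), v.into());
    }

    // Delete by writing a tombstone instead of removing the key,
    // so independent replicas converge on "deleted".
    fn delete(&mut self, k: &str) {
        self.map.insert(k.into(), TOMBSTONE.into());
    }

    // Readers see tombstoned keys as absent.
    fn get(&self, k: &str) -> Option<&str> {
        match self.map.get(k) {
            Some(v) if v.as_str() == TOMBSTONE => None,
            Some(v) => Some(v.as_str()),
            None => None,
        }
    }

    // Garbage collection sweeps tombstones out of storage.
    fn gc(&mut self) {
        self.map.retain(|_, v| v.as_str() != TOMBSTONE);
    }
}
```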
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/47G3VI7yhmPK7xMblLqYH9/1c69708e7e408d8f235964b26586249f/image3.png" />
          </figure><p>Given the inherent difficulty of keeping two independent storage providers synchronized, the architecture included sophisticated machinery to handle data consistency issues between backends. Despite this machinery, consistency edge cases remained more frequent than consumers could tolerate, due to the inherently imperfect availability of upstream object storage systems and the challenges of maintaining perfect synchronization across independent providers.</p><p>Over the years, the system's implementation evolved significantly, including<a href="https://blog.cloudflare.com/faster-workers-kv/"> <u>a variety of performance improvements we discussed last year</u></a>, but the fundamental dual-provider architecture remained unchanged. This provided a reliable foundation for the massive growth in Workers KV usage while maintaining the performance characteristics that made it valuable for global applications.</p>
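<p>The read race described earlier can be sketched with threads standing in for the two providers — illustrative only; the production service races HTTP requests against both backends:</p>

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Race two storage backends for a read and return whichever answers
// first. Simulated with threads and artificial latencies.
fn race_read(
    latency_a: Duration, value_a: &'static str,
    latency_b: Duration, value_b: &'static str,
) -> String {
    let (tx, rx) = mpsc::channel();
    for (latency, value) in [(latency_a, value_a), (latency_b, value_b)] {
        let tx = tx.clone();
        thread::spawn(move || {
            thread::sleep(latency); // simulated provider latency
            let _ = tx.send(value.to_string()); // the loser's send is ignored
        });
    }
    rx.recv().unwrap() // first response wins
}
```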
    <div>
      <h2>Scaling Challenges and Architectural Trade-offs</h2>
      <a href="#scaling-challenges-and-architectural-trade-offs">
        
      </a>
    </div>
    <p>As Workers KV usage scaled dramatically and access patterns became more diverse, the dual-provider architecture faced mounting operational challenges. The providers had fundamentally different limits, failure modes, APIs, and operational procedures that required constant adaptation. </p><p>The scaling issues extended beyond provider reliability. As KV traffic increased, the total number of IOPS exceeded what we could write to local cache infrastructure, forcing us to rely on traditional caching approaches when data was fetched from origin storage. This shift exposed additional consistency edge cases that hadn't been apparent at smaller scales, as the caching behavior became less predictable and more dependent on upstream provider performance characteristics.</p><p>Eventually, the combination of consistency issues, provider reliability disparities, and operational overhead led to a strategic decision to reduce complexity by moving to a single object storage provider earlier this year. This decision was made with awareness of the increased risk profile, but we believed the operational benefits outweighed the risks and viewed this as a temporary intermediate state while we developed our own storage infrastructure.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6ZNIELjE1IHxvygKmVnFEX/f942b19fc2eb20055c350f5a8d83ad60/image5.png" />
          </figure><p>Unfortunately, on June 12, 2025, that risk materialized when our remaining third-party cloud provider experienced a global outage, causing a high percentage of Workers KV requests to fail for a period that lasted over two hours. The cascading impact to customers and to other Cloudflare services was severe: Access failed all identity-based logins, Gateway proxy became unavailable, WARP clients couldn't connect, and dozens of other services experienced significant disruptions.</p>
    <div>
      <h2>Designing the Solution</h2>
      <a href="#designing-the-solution">
        
      </a>
    </div>
    <p>The immediate goal after the incident was clear: bring at least one other fully redundant provider online such that another single-provider outage would not bring KV down. The new provider needed to handle massive scale along several dimensions: hundreds of billions of key-value pairs, petabytes of data stored, millions of GET requests per second, tens of thousands of steady-state PUT/DELETE requests per second, and tens of gigabits per second of throughput—all with high availability and low single-digit millisecond internal latency.</p><p>One obvious option was to bring back the provider that we had disabled earlier in the year. However, we could not just flip the switch back. The infrastructure to run in the dual backend configuration on the prior third-party storage provider was gone and the code had experienced some bit rot, making it infeasible to quickly revert to the previous dual-provider setup. </p><p>Additionally, the other provider had frequently been a source of their own operational problems, with relatively high error rates and concerningly low request throughput limits, that made us hesitant to rely on it again. Ultimately, we decided that our second provider should be entirely owned and operated by Cloudflare.</p><p>The next option was to build directly on top of <a href="https://www.cloudflare.com/developer-platform/products/r2/">Cloudflare R2</a>. We already had a private beta version of Workers KV running on R2, but this experience helped us better understand Workers KV's unique storage requirements. Workers KV's traffic patterns are characterized by hundreds of billions of small objects with a median size of just 288 bytes—very different from typical object storage workloads that assume larger file sizes.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7HwNtmJvOyyLDKPtMwaSxh/035283328f33e80d37832762b994a97b/3.png" />
          </figure><p>For workloads dominated by sub-1KB objects at this scale, database storage becomes significantly more efficient and cost-effective than traditional object storage. When you need to store billions of very small values with minimal per-value overhead, a database is a natural architectural fit. We're working on optimizations for R2 such as inlining small objects with metadata to eliminate additional retrieval hops that will improve performance for small objects, but for our immediate needs, a database-backed solution offered the most promising path forward.</p><p>After thorough evaluation of possible options, we decided to use a distributed database already in production at Cloudflare. This same database is used behind the scenes by both R2 and Durable Objects, giving us several key advantages: we have deep in-house expertise and existing automation for deployment and operations, and we knew we could depend on its reliability and performance characteristics at scale.</p><p>We sharded data across multiple database clusters, each with three-way replication for durability and availability. This approach allows us to scale capacity horizontally while maintaining strong consistency guarantees within each shard. We chose to run multiple clusters rather than one massive system to ensure a smaller blast radius if any cluster becomes unhealthy and to avoid pushing the practical limits of single-cluster scalability as Workers KV continues to grow.</p>
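<p>One common way to stripe namespaces across a fixed set of clusters is a consistent-hash ring, where each cluster owns several points on the ring and a namespace routes to the first point at or after its own hash. An illustrative sketch, not the production routing code:</p>

```rust
use std::collections::BTreeMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_of<T: Hash>(t: &T) -> u64 {
    let mut h = DefaultHasher::new();
    t.hash(&mut h);
    h.finish()
}

// Consistent-hash ring: each cluster owns `replicas` points on the
// ring; a namespace routes to the first cluster point at or after
// its own hash, wrapping around at the end.
struct Ring {
    points: BTreeMap<u64, String>,
}

impl Ring {
    fn new(clusters: &[&str], replicas: u32) -> Self {
        let mut points = BTreeMap::new();
        for c in clusters {
            for r in 0..replicas {
                points.insert(hash_of(&(c, r)), c.to_string());
            }
        }
        Ring { points }
    }

    fn cluster_for(&self, namespace_id: &str) -> &str {
        let h = hash_of(&namespace_id);
        self.points
            .range(h..)
            .next()
            .or_else(|| self.points.iter().next()) // wrap around the ring
            .map(|(_, c)| c.as_str())
            .unwrap()
    }
}
```

Adding or removing a cluster only moves the namespaces adjacent to its ring points, rather than reshuffling everything the way a plain `hash % n` scheme would.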
    <div>
      <h2>Implementing the Solution</h2>
      <a href="#implementing-the-solution">
        
      </a>
    </div>
    <p>One immediate challenge that we ran into when implementing the system was connectivity. The SGW needed to communicate with database clusters running in our core datacenters, but databases typically use binary protocols over persistent TCP connections—not the HTTP-based communication patterns that work efficiently across our global network.</p><p>We built KV Storage Proxy (KVSP) to bridge this gap. KVSP is a service that provides an HTTP interface that our SGW can use while managing the complex database connectivity, authentication, and shard routing behind the scenes. KVSP stripes namespaces across multiple clusters using consistent hashing, preventing hotspotting where popular namespaces could overwhelm single clusters, eliminating noisy neighbor issues, and ensuring capacity limitations are distributed rather than concentrated.</p><p>The biggest downside of using a distributed database for Workers KV’s storage is that, while it excels at handling the small objects that dominate KV traffic, it is not optimal for the occasional large values of up to 25 MiB that some users store. Rather than compromise on either use case, we extended KVSP to automatically route larger objects to Cloudflare R2, creating a hybrid storage architecture that optimizes the backend choice based on object characteristics. From the perspective of SGW, this complexity is completely transparent—the same HTTP API works for all objects regardless of size.</p><p>We also restored our dual-provider capabilities between storage providers from KV’s prior architecture and adapted them to work well in tandem with the changes that had been made to KV’s implementation since it dropped down to a single provider. The modified system now operates by racing writes to both backends simultaneously, but returns success to the client as soon as the first backend confirms the write.</p><p>This improvement minimizes latency while ensuring durability across both systems. 
When one backend succeeds but the other fails—due to temporary network issues, rate limiting, or service degradation—the failed write is queued for background reconciliation, part of the synchronization machinery described in more detail below.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2rUmyvzl6LsWdsjoD8H7vv/23d25498c1dc01cc3df84d92fa2c500e/image6.png" />
          </figure>
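<p>The two routing decisions described above — striping a namespace across clusters, and diverting large values to object storage — can be sketched like this. The 1 MiB cutoff and the hash-mod placement are simplified stand-ins; production uses consistent hashing and its own threshold.</p>

```typescript
import { createHash } from "node:crypto";

// Illustrative cutoff: small values stay in the database-backed store,
// large values (up to 25 MiB) are routed to object storage (R2).
const LARGE_OBJECT_THRESHOLD = 1 * 1024 * 1024;

function chooseBackend(valueSizeBytes: number): "database" | "r2" {
  return valueSizeBytes > LARGE_OBJECT_THRESHOLD ? "r2" : "database";
}

// Striping: hash the namespace and key to pick a cluster, so a hot
// namespace is spread across all clusters instead of pinned to one.
// (Hash-mod shown for brevity; a consistent-hash ring behaves better
// when the cluster count changes.)
function clusterFor(namespace: string, key: string, clusterCount: number): number {
  const digest = createHash("sha256").update(`${namespace}/${key}`).digest();
  return digest.readUInt32BE(0) % clusterCount;
}
```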
    <div>
      <h2>Deploying the Solution</h2>
      <a href="#deploying-the-solution">
        
      </a>
    </div>
    <p>With the hybrid architecture implemented, we began a careful rollout process designed to validate the new system while maintaining service availability.</p><p>The first step was introducing background writes from SGW to the new Cloudflare backend. This allowed us to validate write performance and error rates under real production load without affecting read traffic or user experience. It also was a necessary step in copying all data over to the new backend.</p><p>Next, we copied existing data from the third-party provider to our new backend running on Cloudflare infrastructure, routing the data through KVSP. This brought us to a critical milestone: we were now in a state where we could manually failover all operations to the new backend within minutes in the event of another provider outage. The single point of failure that caused the June incident had been eliminated.</p><p>With confidence in the failover capability, we began enabling our first namespaces in active-active mode, starting with internal Cloudflare services where we had sophisticated monitoring and deep understanding of the workloads. We dialed up traffic very slowly, carefully comparing results between backends. The fact that SGW could see responses from both backends asynchronously—after already returning a response to the user—allowed us to perform detailed comparisons and catch any discrepancies without impacting user-facing latency.</p><p>During testing, we discovered an important consistency regression compared to our single provider setup, which caused us to briefly roll back the change to put namespaces in active-active mode. 
While Workers KV<a href="https://developers.cloudflare.com/kv/concepts/how-kv-works/"> <u>is eventually consistent by design</u></a>, with changes taking up to 60 seconds to propagate globally as cached versions time out, we had inadvertently regressed read-your-own-write (RYOW) consistency for requests routed through the same Cloudflare point of presence.</p><p>In the previous dual provider active-active setup, RYOW was provided within each PoP because we wrote PUT operations directly to a local cache instead of relying on the traditional caching system in front of upstream storage. However, KV throughput had outscaled the number of IOPS that the caching infrastructure could support, so we could no longer rely on that approach. This wasn't a documented property of Workers KV, but it is behavior that some customers have come to rely on in their applications.</p><p>To understand the scope of this issue, we created an adversarial test framework designed to maximize the likelihood of hitting consistency edge cases by rapidly interspersing reads and writes to a small set of keys from a handful of locations around the world. This framework allowed us to measure the percentage of reads where we observed a violation of RYOW consistency—scenarios where a read immediately following a write from the same point of presence would return stale data instead of the value that was just written. This allowed us to design and verify a new approach to how KV populates and invalidates data in cache, which restored the RYOW behavior that customers expect while maintaining the performance characteristics that make Workers KV effective for high-read workloads.</p>
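<p>The core loop of such an adversarial read-your-own-write check can be sketched as below. The <code>KvClient</code> interface is a hypothetical stand-in for a client pinned to one point of presence; the real framework also interleaves multiple keys and locations.</p>

```typescript
// Sketch of an adversarial RYOW measurement: write a fresh value,
// immediately read it back through the same point of presence, and
// count reads that return stale data. Illustrative interface only.

interface KvClient {
  put(key: string, value: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

async function measureRyowViolations(
  client: KvClient,
  key: string,
  iterations: number,
): Promise<number> {
  let violations = 0;
  for (let i = 0; i < iterations; i++) {
    const expected = `v${i}`;
    await client.put(key, expected);
    const observed = await client.get(key);
    // A stale read here is a read-your-own-write violation.
    if (observed !== expected) violations++;
  }
  return violations;
}
```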
    <div>
      <h2>How KV Maintains Consistency Across Multiple Backends</h2>
      <a href="#how-kv-maintains-consistency-across-multiple-backends">
        
      </a>
    </div>
    <p>With writes racing to both backends and reads potentially returning different results, maintaining data consistency across independent storage providers requires a sophisticated multi-layered approach. While the details have evolved over time, KV has always taken the same basic approach, consisting of three complementary mechanisms that work together to reduce the likelihood of inconsistencies and minimize the window for data divergence.</p><p>The first line of defense happens during write operations. When SGW sends writes to both backends simultaneously, we treat the write as successful as soon as either provider confirms persistence. However, if a write succeeds on one provider but fails on the other—due to network issues, rate limiting, or temporary service degradation—the failed write is captured and sent to a background reconciliation system. This system deduplicates failed keys and initiates a synchronization process to resolve the inconsistency.</p><p>The second mechanism activates during read operations. When SGW races reads against both providers and notices different results, it triggers the same background synchronization process. This helps ensure that keys that become inconsistent are brought back into alignment when first accessed rather than remaining divergent indefinitely.</p><p>The third layer consists of background crawlers that continuously scan data across both providers, identifying and fixing any inconsistencies missed by the previous mechanisms. These crawlers also provide valuable data on consistency drift rates, helping us understand how frequently keys slip through the reactive mechanisms and address any underlying issues.</p><p>The synchronization process itself relies on version metadata that we attach to every key-value pair. Each write automatically generates a new version consisting of a high-precision timestamp plus a random nonce, stored alongside the actual data. 
When comparing values between providers, we can determine which version is newer based on these timestamps. The newer value is then copied to the provider with the older version.</p><p>In rare cases where timestamps are within milliseconds of each other, clock skew could theoretically cause incorrect ordering, though given the tight bounds we maintain on our clocks through <a href="https://www.cloudflare.com/time/"><u>Cloudflare Time Services</u></a> and typical write latencies, such conflicts would only occur with nearly simultaneous overlapping writes.</p><p>To prevent data loss during synchronization, we use conditional writes that verify the existing value’s timestamp is older before writing, instead of blindly overwriting values. This allows us to avoid introducing new inconsistency issues in cases where writes in close succession succeed on different backends and the synchronization process copies older values over newer values.</p><p>Similarly, we can’t just delete data when the user requests it because if the delete only succeeded on one backend, the synchronization process would see this as missing data and copy it from the other backend. Instead, we overwrite the value with a tombstone that has a newer timestamp and no actual data. Only after both providers have the tombstone do we proceed with actually removing the keys from storage.</p><p>This layered consistency architecture doesn't guarantee strong consistency, but in practice it does eliminate most mismatches between backends while maintaining a performance profile that makes Workers KV attractive for latency-sensitive, high-read workloads, while also providing high availability in the case of any backend errors. 
In distributed systems terms, KV chooses availability (AP) over consistency (CP) in the <a href="https://en.wikipedia.org/wiki/CAP_theorem"><u>CAP theorem</u></a>, and more interestingly also chooses latency over consistency in the absence of a partition, meaning it’s PA/EL under the <a href="https://en.wikipedia.org/wiki/PACELC_design_principle"><u>PACELC theorem</u></a>. Most inconsistencies are resolved within seconds through the reactive mechanisms, while the background crawlers ensure that even edge cases are typically corrected over time.</p><p>The above description applies to both our historical dual-provider setup and today's implementation, but two key improvements in the current architecture lead to significantly better consistency outcomes. First, KVSP maintains a much lower steady-state error rate compared to our previous third-party providers, reducing the frequency of write failures that create inconsistencies in the first place. Second, we now race all reads against both backends, whereas the previous system optimized for cost and latency by preferentially routing reads to a single provider after an initial learning period.</p><p>In the original dual-provider architecture, each SGW instance would initially race reads against both providers to establish baseline performance characteristics. Once an instance determined that one provider consistently outperformed the other for its geographic region, it would route subsequent reads exclusively to the faster provider, only falling back to the slower provider when the primary experienced failures or abnormal latency. While this approach effectively controlled third-party provider costs and optimized read performance, it created a significant blind spot in our consistency detection mechanisms—inconsistencies between providers could persist indefinitely if reads were consistently served from only one backend.</p>
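<p>A minimal sketch of this versioning, conditional-write, and tombstone logic, using in-memory maps and millisecond timestamps in place of the real high-precision versions:</p>

```typescript
// Simplified model of version metadata and newer-wins synchronization.
// Every write carries a timestamp plus a random nonce; deletes are
// tombstones (null values) so the sync process can order them like any
// other write. Illustrative types, not the production implementation.

interface Versioned {
  timestampMs: number;
  nonce: string;          // random nonce stored alongside the timestamp
  value: string | null;   // null marks a tombstone
}

// Conditional write: only replace if the incoming version is newer, so
// a sync pass can never clobber a fresher value with a stale one.
function conditionalPut(store: Map<string, Versioned>, key: string, incoming: Versioned): boolean {
  const current = store.get(key);
  if (current && current.timestampMs >= incoming.timestampMs) return false;
  store.set(key, incoming);
  return true;
}

// One reconciliation step for a key: copy whichever side is newer over
// the other (the real system applies this symmetrically across backends).
function syncKey(a: Map<string, Versioned>, b: Map<string, Versioned>, key: string): void {
  const va = a.get(key);
  const vb = b.get(key);
  if (va && (!vb || va.timestampMs > vb.timestampMs)) conditionalPut(b, key, va);
  else if (vb && (!va || vb.timestampMs > va.timestampMs)) conditionalPut(a, key, vb);
}
```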
    <div>
      <h2>Results: Performance and Availability Gains</h2>
      <a href="#results-performance-and-availability-gains">
        
      </a>
    </div>
    <p>With these consistency mechanisms in place and our careful rollout strategy validated through internal services, we continued expanding active-active operation to additional namespaces across both internal and external workloads, and we were thrilled with what we saw. Not only did the new architecture provide the increased availability we needed for Workers KV, it also delivered significant performance improvements.</p><p>These performance gains were particularly pronounced in Europe, where our new storage backend is located, but the benefits extended far beyond what geographic locality alone could explain. The internal latency improvements compared to the third-party object store we were writing to in parallel were remarkable.</p><p>For example, p99 internal latency for reads to KVSP was below 5 milliseconds. For comparison, non-cached reads to the third-party object store from our closest location—after normalizing for transit time to create an apples-to-apples comparison—were typically around 80ms at p50 and 200ms at p99.</p><p>The graphs below show the closest thing that we can get to an apples-to-apples comparison: our observed internal latency for requests to KVSP compared with observed latency for requests that are cache misses and end up being forwarded to the external service provider from the closest point of presence, which includes an additional 5-10 milliseconds of request transit time.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6MtjLi4ajLQTTcqQ77Qgm0/10c9b30285d2725f6bf94a519c2d3a78/5.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3zOE8nlQm46xejHuC6ns7b/780bdba6913e45b4332841cfb287400c/6.png" />
          </figure><p>These performance improvements translated directly into faster response times for the many internal Cloudflare services that depend on Workers KV, creating cascading benefits across our platform. The database-optimized storage proved particularly effective for the small object access patterns that dominate Workers KV traffic.</p><p>After seeing these positive results, we continued expanding the rollout, copying data and enabling groups of namespaces for both internal and external customers. The combination of improved availability and better performance validated our architectural approach and demonstrated the value of building critical infrastructure on our own platform.</p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Our immediate plans focus on expanding this hybrid architecture to provide even greater resilience and performance for Workers KV. We're rolling out the KVSP solution to additional locations, creating a truly global distributed backend that can serve traffic entirely from our own infrastructure while also working to further improve how quickly we reach consistency between providers and in cache after writes.</p><p>Our ultimate goal is to eliminate our remaining third-party storage dependency entirely, achieving full infrastructure independence for Workers KV. This will remove the external single points of failure that led to the June incident while giving us complete control over the performance and reliability characteristics of our storage layer.</p><p>Beyond Workers KV, this project has demonstrated the power of hybrid architectures that combine the best aspects of different storage technologies. The patterns we've developed—using KVSP as a translation layer, automatically routing objects based on size characteristics, and leveraging our existing database expertise—can be applied to other services that need to balance global scale with strong consistency requirements. The journey from a single-provider setup to a resilient hybrid architecture running on Cloudflare infrastructure demonstrates how thoughtful engineering can turn operational challenges into competitive advantages. With dramatically improved performance and active-active redundancy, Workers KV is well positioned to serve as an even more reliable foundation for the growing set of customers that depend on it.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Post Mortem]]></category>
            <guid isPermaLink="false">6Cd705JORQK737rDeTEDX8</guid>
            <dc:creator>Alex Robinson</dc:creator>
            <dc:creator>Tyson Trautmann</dc:creator>
        </item>
        <item>
            <title><![CDATA[We made Workers KV up to 3x faster — here’s the data]]></title>
            <link>https://blog.cloudflare.com/faster-workers-kv/</link>
            <pubDate>Thu, 26 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Speed is a critical factor that dictates Internet behavior. Every additional millisecond a user spends waiting for your web page to load increases the chance that they abandon your website.  ]]></description>
            <content:encoded><![CDATA[ <p>Speed is a critical factor that dictates Internet behavior. Every additional millisecond a user spends waiting for your web page to load increases the chance that they abandon your website. The old adage remains as true as ever: <a href="https://www.cloudflare.com/learning/performance/more/website-performance-conversion-rates/"><u>faster websites result in higher conversion rates</u></a>. And with such outcomes tied to Internet speed, we believe a faster Internet is a better Internet.</p><p>Customers often use <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a> to provide <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a> with key-value data for configuration, routing, personalization, experimentation, or serving assets. Many of Cloudflare’s own products rely on KV for just this purpose: <a href="https://developers.cloudflare.com/pages"><u>Pages</u></a> stores static assets, <a href="https://developers.cloudflare.com/cloudflare-one/policies/access/"><u>Access</u></a> stores authentication credentials, <a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a> stores routing configuration, and <a href="https://developers.cloudflare.com/images/"><u>Images</u></a> stores configuration and assets, among others. So KV’s speed affects the latency of every request to an application, throughout the entire lifecycle of a user session. </p><p>Today, we’re announcing up to 3x faster KV hot reads, with all KV operations faster by up to 20ms. And we want to pull back the curtain and show you how we did it. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67VzWOTRpMd9Dbc6ZM7M4M/cefb1d856344d9c963d4adfbe1b32fba/BLOG-2518_2.png" />
          </figure><p><sup><i>Workers KV read latency (ms) by percentile measured from Pages</i></sup></p>
    <div>
      <h2>Optimizing Workers KV’s architecture to minimize latency</h2>
      <a href="#optimizing-workers-kvs-architecture-to-minimize-latency">
        
      </a>
    </div>
    <p>At a high level, Workers KV is itself a Worker that makes requests to central storage backends, with many layers in between to properly cache and route requests across Cloudflare’s network. You can rely on Workers KV to support operations made by your Workers at any scale, and KV’s architecture will seamlessly handle your required throughput. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3pcw5pO2eeGJ1RriESJFSB/651fe26718f981eb741ad2ffb2f288c9/BLOG-2518_3.png" />
          </figure><p><sup><i>Sequence diagram of a Workers KV operation</i></sup></p><p>When your Worker makes a read operation to Workers KV, your Worker establishes a network connection within its Cloudflare region to KV’s Worker. The KV Worker then accesses the <a href="https://developers.cloudflare.com/workers/runtime-apis/cache/"><u>Cache API</u></a>, and in the event of a cache miss, retrieves the value from the storage backends. </p><p>Let’s look one level deeper at a simplified trace: </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7mCpF8NRgSg70p8VNTFXu8/4cabdae18285575891f49a5cd34c9ab8/BLOG-2518_4.png" />
          </figure><p><sup><i>Simplified trace of a Workers KV operation</i></sup></p><p>From the top, here are the operations completed for a KV read operation from your Worker:</p><ol><li><p>Your Worker makes a connection to Cloudflare’s network in the same data center. This incurs ~5 ms of network latency.</p></li><li><p>Upon entering Cloudflare’s network, a service called <a href="https://blog.cloudflare.com/upgrading-one-of-the-oldest-components-in-cloudflare-software-stack/"><u>Front Line (FL)</u></a> is used to process the request. This incurs ~10 ms of operational latency.</p></li><li><p>FL proxies the request to the KV Worker. The KV Worker does a cache lookup for the key being accessed. This, once again, passes through the Front Line layer, incurring an additional ~10 ms of operational latency.</p></li><li><p>Cache is stored in various backends within each region of Cloudflare’s network. A service built upon <a href="https://blog.cloudflare.com/pingora-open-source/"><u>Pingora</u></a>, our open-sourced Rust framework for proxying HTTP requests, routes the cache lookup to the proper cache backend.</p></li><li><p>Finally, if the cache lookup is successful, the KV read operation is resolved. Otherwise, the request reaches our storage backends, where it gets its value.</p></li></ol><p>Looking at these flame graphs, it became apparent that a major opportunity presented itself to us: reducing the FL overhead (or eliminating it altogether) and reducing the cache misses across the Cloudflare network would reduce the latency for KV operations.</p>
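<p>The read path in the trace above — cache lookup first, storage backends only on a miss — can be sketched as follows. The interfaces are illustrative stand-ins, not Cloudflare's internal API.</p>

```typescript
// Simplified sketch of the KV read path: try the cache, fall back to
// the storage backend on a miss, and populate the cache on the way out.

interface AsyncStore {
  get(key: string): Promise<string | undefined>;
  put(key: string, value: string): Promise<void>;
}

async function kvRead(
  cache: AsyncStore,
  storage: AsyncStore,
  key: string,
): Promise<string | undefined> {
  const cached = await cache.get(key);
  if (cached !== undefined) return cached;   // cache hit: resolve locally
  const value = await storage.get(key);      // cache miss: go to storage backends
  if (value !== undefined) await cache.put(key, value); // warm the cache
  return value;
}
```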
    <div>
      <h3>Bypassing FL layers between Workers and services to save ~20ms</h3>
      <a href="#bypassing-fl-layers-between-workers-and-services-to-save-20ms">
        
      </a>
    </div>
    <p>A request from your Worker to KV doesn’t need to go through FL. Much of FL’s responsibility is to process and route requests from outside of Cloudflare — that’s more than is needed to handle a request from the KV binding to the KV Worker. So we skipped the Front Line altogether in both layers.</p><div>
  
</div>
<p><sup><i>Reducing latency in a Workers KV operation by removing FL layers</i></sup></p><p>To bypass the FL layer from the KV binding in your Worker, we modified the KV binding to connect directly to the KV Worker within the same Cloudflare location. Within the Workers host, we configured a C++ subpipeline to allow code from bindings to establish a direct connection with the proper routing configuration and authorization loaded. </p><p>The KV Worker also passes through the FL layer on its way to our internal <a href="https://blog.cloudflare.com/pingora-open-source/"><u>Pingora</u></a> service. In this case, we were able to use an internal Worker binding that allows Workers for Cloudflare services to bind directly to non-Worker services within Cloudflare’s network. With this fix, the KV Worker sets the proper cache control headers and establishes its connection to Pingora without leaving the network. </p><p>Together, both of these changes reduced latency by ~20 ms for every KV operation. </p>
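<p>Conceptually, the direct connection resembles a Workers service binding, where one Worker invokes another without re-entering the network path. The sketch below uses the public service-binding style as an approximation; the internal binding described here is not the public API, and the <code>KV_WORKER</code> name is hypothetical.</p>

```typescript
// Approximation of the direct Worker-to-Worker hop using the public
// service-binding style. KV_WORKER is a hypothetical binding name.

interface Env {
  KV_WORKER: { fetch(request: Request): Promise<Response> };
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Direct call to the bound Worker: no Front Line hop in between.
    return env.KV_WORKER.fetch(request);
  },
};
// In a real Worker, `worker` would be the module's default export.
```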
    <div>
      <h3>Implementing tiered cache to minimize requests to storage backends</h3>
      <a href="#implementing-tiered-cache-to-minimize-requests-to-storage-backends">
        
      </a>
    </div>
    <p>We also optimized KV’s architecture to reduce the number of requests that need to reach our centralized storage backends. These storage backends are further away and incur network latency, so improving the cache hit rate in regions close to your Workers significantly improves read latency.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1791aSxXPH1lgOIr3RQrLz/1685f6a57d627194e76cb657cd22ddd8/BLOG-2518_5.png" />
          </figure><p><sup><i>Workers KV uses Tiered Cache to resolve operations closer to your users</i></sup></p><p>To accomplish this, we used <a href="https://developers.cloudflare.com/cache/how-to/tiered-cache/#custom-tiered-cache"><u>Tiered Cache</u></a>, and implemented a cache topology that is fine-tuned to the usage patterns of KV. With a tiered cache, requests to KV’s storage backends are cached in regional tiers in addition to local (lower) tiers. With this architecture, KV operations that are cache misses locally may be resolved regionally, which is especially significant if you have traffic across an entire region spanning multiple Cloudflare data centers. </p><p>This significantly reduced the number of requests that needed to hit the storage backends, with ~30% of requests resolved in tiered cache instead of storage backends.</p>
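<p>A tiered lookup of this kind can be sketched with simple in-memory maps standing in for the local tier, regional tier, and storage backends:</p>

```typescript
// Illustrative tiered read: local tier first, then the regional tier,
// then storage. Each miss warms the tiers on the way back so the next
// nearby request resolves closer to the user.

type Tier = Map<string, string>;

function tieredRead(local: Tier, regional: Tier, storage: Tier, key: string): string | undefined {
  const fromLocal = local.get(key);
  if (fromLocal !== undefined) return fromLocal;

  const fromRegional = regional.get(key);
  if (fromRegional !== undefined) {
    local.set(key, fromRegional); // resolved regionally, warm the local tier
    return fromRegional;
  }

  const fromStorage = storage.get(key);
  if (fromStorage !== undefined) {
    regional.set(key, fromStorage); // warm both tiers on a full miss
    local.set(key, fromStorage);
  }
  return fromStorage;
}
```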
    <div>
      <h2>KV’s new architecture</h2>
      <a href="#kvs-new-architecture">
        
      </a>
    </div>
    <p>As a result of these optimizations, KV operations are now simplified:</p><ol><li><p>When you read from KV in your Worker, the <a href="https://developers.cloudflare.com/kv/concepts/kv-bindings/"><u>KV binding</u></a> binds directly to KV’s Worker, saving 10 ms. </p></li><li><p>The KV Worker binds directly to the Tiered Cache service, saving another 10 ms. </p></li><li><p>Tiered Cache is used in front of storage backends, to resolve local cache misses regionally, closer to your users.</p></li></ol>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2cW0MsOH120GKUAlIUvDR4/7f574632ee81d3b955ed99bf87d86fa2/BLOG-2518_6.png" />
          </figure><p><sup><i>Sequence diagram of KV operations with new architecture</i></sup></p><p>In aggregate, these changes significantly reduced KV’s latency. The impact of the direct binding to cache is clearly seen in the wall time of the KV Worker, since this value measures the duration of a retrieval of a key-value pair from cache. The 90th percentile of all KV Worker invocations now resolves in less than 12 ms — before the direct binding to cache, that was 22 ms. That’s a 10 ms decrease in latency. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UmcRB0Afk8mHig2DrThsh/d913cd33a28c1c2b093379238a90551c/BLOG-2518_7.png" />
          </figure><p><sup><i>Wall time (ms) within the KV Worker by percentile</i></sup></p><p>These KV read operations resolve quickly because the data is cached locally in the same Cloudflare location. But what about reads that aren’t resolved locally? ~30% of these resolve regionally within the tiered cache. Reads from tiered cache are up to 100 ms faster than when resolved at central storage backends, once again contributing to making KV reads faster in aggregate.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Gz2IxlcNuDDRp3MhC4m40/ee364b710cec4332a5c307a784f34fa4/BLOG-2518_8.png" />
          </figure><p><sup><i>Wall time (ms) within the KV Worker for tiered cache vs. storage backends reads</i></sup></p><p>These graphs demonstrate the impact of direct binding from the KV binding to cache, and tiered cache. To see the impact of the direct binding from a Worker to the KV Worker, we need to look at the latencies reported by Cloudflare products that use KV.</p><p><a href="https://developers.cloudflare.com/pages/"><u>Cloudflare Pages</u></a>, which serves static assets like HTML, CSS, and scripts from KV, saw load times for fetching assets improve by up to 68%. Workers asset hosting, which we also announced as part of today’s <a href="http://blog.cloudflare.com/builder-day-2024-announcements"><u>Builder Day announcements</u></a>, gets this improved performance from day 1.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/67VzWOTRpMd9Dbc6ZM7M4M/cefb1d856344d9c963d4adfbe1b32fba/BLOG-2518_2.png" />
          </figure><p><sup><i>Workers KV read operation latency measured within Cloudflare Pages by percentile</i></sup></p><p><a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a> and <a href="https://developers.cloudflare.com/cloudflare-one/applications/"><u>Access</u></a> also saw their latencies for KV operations drop, with their KV read operations now 2-5x faster. These services rely on Workers KV data for configuration and routing data, so KV’s performance improvement directly contributes to making them faster on each request. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Gz2IxlcNuDDRp3MhC4m40/ee364b710cec4332a5c307a784f34fa4/BLOG-2518_8.png" />
          </figure><p><sup><i>Workers KV read operation latency measured within Cloudflare Queues by percentile</i></sup></p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1HFapaO1Gv09g9VlODrLAu/56d39207fb3dfefe258fa5e8cb8bd67b/BLOG-2518_10.png" />
          </figure><p><sup><i>Workers KV read operation latency measured within Cloudflare Access by percentile</i></sup></p><p>These are just some of the direct effects that a faster KV has had on other services. Across the board, requests are resolving faster thanks to KV’s faster response times. </p><p>And we have one more thing to make KV lightning fast. </p>
    <div>
      <h3>Optimizing KV’s hottest keys with an in-memory cache </h3>
      <a href="#optimizing-kvs-hottest-keys-with-an-in-memory-cache">
        
      </a>
    </div>
    <p>Less than 0.03% of keys account for nearly half of requests to the Workers KV service across all namespaces. These keys are read thousands of times per second, so making these faster has a disproportionate impact. Could these keys be resolved within the KV Worker without needing additional network hops?</p><p>Almost all of these keys are under 100 KB. At this size, it becomes possible to use the in-memory cache of the KV Worker — a limited amount of memory within the <a href="https://developers.cloudflare.com/workers/reference/how-workers-works/#isolates"><u>main runtime process</u></a> of a Worker sandbox. And that’s exactly what we did. For the highest throughput keys across Workers KV, reads resolve without even needing to leave the Worker runtime process.</p><div>
  
</div>
<p><sup><i>Sequence diagram of KV operations with the hottest keys resolved within an in-memory cache</i></sup></p><p>As a result of these changes, KV reads for these keys, which represent over 40% of Workers KV requests globally, resolve in under a millisecond. We’re actively testing these changes internally and expect to roll this out during October to speed up the hottest key-value pairs on Workers KV.</p>
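<p>The in-memory tier can be sketched as a small map gated by a value-size cap, so only the hot, small keys ever live in the Worker process. The entry limit and eviction policy below are illustrative assumptions; only the ~100 KB size figure comes from the post.</p>

```typescript
// Illustrative in-memory hot-key cache: values under a size cap resolve
// without leaving the Worker runtime process. Entry limit and eviction
// are assumed for the sketch.

const MAX_VALUE_BYTES = 100 * 1024; // ~100 KB, per the post
const MAX_ENTRIES = 1024;           // illustrative memory bound

const hotKeys = new Map<string, string>();

function cachePut(key: string, value: string): void {
  if (value.length > MAX_VALUE_BYTES) return; // too large for the in-memory tier
  if (hotKeys.size >= MAX_ENTRIES) {
    // Evict the oldest entry (Map preserves insertion order).
    const oldest = hotKeys.keys().next().value;
    if (oldest !== undefined) hotKeys.delete(oldest);
  }
  hotKeys.set(key, value);
}

function cacheGet(key: string): string | undefined {
  return hotKeys.get(key);
}
```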
    <div>
      <h2>A faster KV for all</h2>
      <a href="#a-faster-kv-for-all">
        
      </a>
    </div>
    <p>Most of these speed gains are already enabled with no additional action needed from customers. Your websites that are using KV are already responding to requests faster for your users, as are the other Cloudflare services using KV under the hood and the countless websites that depend upon them. </p><p>And we’re not done: we’ll continue to chase performance throughout our stack to make your websites faster. That’s how we’re going to move the needle towards a faster Internet. </p><p>To see Workers KV’s recent speed gains for your own KV namespaces, head over to your dashboard and check out the <a href="https://developers.cloudflare.com/kv/observability/metrics-analytics/"><u>new KV analytics</u></a>, with latency and cache status detailed per namespace.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">76i5gcdv0wbMNnwa7E17MR</guid>
            <dc:creator>Thomas Gauvin</dc:creator>
            <dc:creator>Rob Sutter</dc:creator>
            <dc:creator>Andrew Plunk</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Picsart leverages Cloudflare's Developer Platform to build globally performant services]]></title>
            <link>https://blog.cloudflare.com/picsart-move-to-workers-huge-performance-gains/</link>
            <pubDate>Wed, 03 Apr 2024 13:00:02 GMT</pubDate>
            <description><![CDATA[ Picsart, one of the world’s largest digital creation platforms, encountered performance challenges in catering to its global audience. ]]></description>
            <content:encoded><![CDATA[ <p>Delivering great user experiences with a global user base can be challenging. While serving requests quickly when you start out in a local market is straightforward, doing so for a global audience is much more difficult. Why? Even under optimal conditions, you cannot be faster than the speed of light, which brings single data center solutions to their performance limits.</p><p>In this post, we will cover how Picsart improved the performance of one of its most critical services by moving from a centralized architecture to a globally distributed service built on Cloudflare. Our serverless compute platform, <a href="https://developers.cloudflare.com/workers/">Workers</a>, distributed throughout <a href="https://www.cloudflare.com/network/">310+ cities</a> around the world, and our globally distributed <a href="https://developers.cloudflare.com/kv/">Workers KV</a> storage allowed them to improve their performance significantly and drive real business impact.</p>
    <div>
      <h2>Success driven by data-driven insights</h2>
      <a href="#success-driven-by-data-driven-insights">
        
      </a>
    </div>
    <p><a href="https://picsart.com">Picsart</a> is one of the world’s largest digital creation platforms and a long-standing Cloudflare partner. At its core, an advanced tech stack powers its comprehensive features, including AI-driven photo and video editing tools and community-driven content sharing. With its infrastructure spanning across multiple cloud environments and on-prem deployments, Picsart is engineered to handle billions of daily requests from its huge mobile and web user base and API integrations. For over a decade, Cloudflare has been integral to Picsart, providing support for performant content delivery and securing its digital ecosystem.  </p><p>Similar to many other tech giants, Picsart approaches product development in a data-driven way. At the core of the innovation is Picsart's remote configuration and experimentation platform, which enables product managers, UX researchers, and others to segment their user base into different test groups. These test groups might get to see slightly different implementations of features or designs of the Picsart app. Users might also get early access to experimental features or see different in-app promotions. In combination with constant monitoring of relevant KPIs, this allows for informed product decisions based on user preference and business impact.</p><p>On each app start, the client device sends a request to the remote configuration service for the latest setup tailored to the user's session. The assignment of experiments relies on factors like the operating system and previous sessions, making each request unique and uncachable. Picsart's app showcases extensive remote configuration capabilities, enabling adjustments to nearly every element. This results in a response containing a 1.5 MB configuration file for mobile clients. 
While the long-term solution is to reduce the file size, which has grown over time as more teams adopted the powerful service, this is not possible in the near or mid-term as it requires a major rewrite of all clients.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xgUdo4wBKIbu0YzxXotHi/3e855c5adc807cb115a3a59fcaf1893b/Screenshot-2024-04-01-at-2.47.52-PM.png" />
            
            </figure><p>This setup request is blocking in the "hot path" during app start, as the results of this request will decide how the app itself looks and behaves. Hence, performance is critical. To ensure users are not waiting for too long, Picsart apps will wait for 1500ms on mobile for the request to complete – if it does not, the user will not be assigned a test group and the app will fallback to default settings.</p>
    <div>
      <h2>The clock is ticking</h2>
      <a href="#the-clock-is-ticking">
        
      </a>
    </div>
    <p>While a 1500ms round trip time seems like a sufficiently large time budget, the data suggested otherwise. Before the improvements were implemented, a staggering 50% of devices could not complete the requests in time. How come? In these 1.5 seconds the following steps need to complete:</p><ol><li><p>The request must travel from the users’ devices to the centralized backend servers</p></li><li><p>The server processes the request based on dozens of user attributes provided in the request and thousands of defined remote configuration variations, running experiments, and segments metadata. Using all the info, the server selects the right variation of each remote setting entry and builds the response payload.</p></li><li><p>The response must travel from the centralized backend servers to the user devices.</p></li></ol>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/54ZFnOPI3zMI2FNmf5uOwU/18cc25a5ee55887e590128be733b7774/image1.png" />
            
            </figure><p>Looking at the data, it was clear to the Picsart team that their backend service was already well-optimized, taking only 30 milliseconds, a tiny fraction of the available time budget, to process each of the billions of monthly requests. The bulk of the request time came from <a href="https://www.cloudflare.com/learning/performance/glossary/what-is-latency/">network latency</a>. Especially with mobile devices, last mile performance can be very volatile, eating away a significant amount of the available time budget. Not only that, but the data was clear: users closer to the origin server had a much higher chance of making the round trip in time versus users out of region. It quickly became obvious that Picsart, fueled by its global success, had outgrown a single-region setup.</p>
    <div>
      <h2>To the drawing board</h2>
      <a href="#to-the-drawing-board">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4XTH17l3JlPKoeV7i0MO5p/501ffe57026f80797089df6ddd627892/image6.png" />
            
            </figure><p>A solution that comes to mind would be to replicate the existing cloud infrastructure in multiple regions and use global load balancing to minimize the distance a request needs to travel. However, this introduces significant overhead and cost. On the infrastructure side, it is not only the additional compute instances and database clusters that incur cost, but also cross-region data transfer to keep data in sync. Moreover, technical teams would need to operate and monitor infrastructure in multiple regions, which can add a lot to the complexity and cognitive load, leading to decreased development velocity and productivity loss.</p><p>Picsart instead looked to Cloudflare – we already had a long-lasting relationship for <a href="https://www.cloudflare.com/application-services/">Application Performance and Security</a>, and they aimed to use our <a href="https://www.cloudflare.com/developer-platform/">Developer Platform</a> to tackle the problem.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7mRjLlmOaeyNOM6eUUudvZ/fc39dee9770bf76bef154efd52cdb48b/image2.png" />
            
            </figure><p>Workers and Workers KV seemed like the ideal solution. Both compute and data are globally distributed in <a href="https://www.cloudflare.com/network">310+ locations around the world</a>, resulting in a shorter distance between end users and the experimentation service. Not only that, but Cloudflare's global-by-default approach allows for deployment with minimal overhead, and in contrast to other considered solutions, no additional fees to distribute the data around the globe.</p>
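<p>In its simplest form, such a service is a Worker that reads a record from a KV binding and returns it. A minimal sketch (the binding name <code>CONFIG_KV</code> and the key are assumptions for illustration, not from the original setup):</p>

```javascript
// Minimal configuration service: read a record from the KV binding and
// return it as JSON. In a real module-syntax Worker this object would be
// the module's `export default`; CONFIG_KV is a hypothetical binding name.
const worker = {
  async fetch(request, env) {
    const raw = await env.CONFIG_KV.get('config.json', 'text');
    return new Response(raw ?? '{}', {
      headers: { 'content-type': 'application/json' },
    });
  },
};
```

<p>Because both the code and the KV data are replicated across Cloudflare's network, this same handler runs close to every user with no per-region deployment work.</p>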
    <div>
      <h2>No race without a clock</h2>
      <a href="#no-race-without-a-clock">
        
      </a>
    </div>
    <p>The objective for the refactor of the experimentation service was to increase the share of devices that successfully receive the experimentation configuration within the set time budget.</p><p>But how to measure success? While synthetic testing can be useful in many situations, Picsart opted for another clever solution:</p><p>During development, the Picsart engineers had already added a testing endpoint to the web and mobile versions of their app that sends a duplicate request to the new endpoint, discarding the response and swallowing all potential errors. This allows them to collect timing data based on real-user metrics without impacting the app's performance or reliability.</p><p>A simplified version of this pattern for a web client could look like this:</p>
            <pre><code>// API endpoint URLs
const prodUrl = 'https://prod.example.com/';
const devUrl = 'https://new.example.com/';

// Function to collect metrics
const collectMetrics = (duration) =&gt; {
    console.log('Request duration:', duration);
    // …
};

// Function to fetch data from an endpoint and call collectMetrics
const fetchData = async (url, options) =&gt; {
    const startTime = performance.now();
    
    try {
        const response = await fetch(url, options);
        const endTime = performance.now();
        const duration = endTime - startTime;
        collectMetrics(duration);
        return await response.json();
    } catch (error) {
        console.error('Error fetching data:', error);
    }
};

// Fetching data from both endpoints
async function fetchDataFromBothEndpoints() {
    try {
        const result1 = await fetchData(prodUrl, { method: 'POST', ... });
        console.log('Result from endpoint 1:', result1);

        // Fetching data from the second endpoint without awaiting its completion
        fetchData(devUrl, { method: 'POST', ... });
    } catch (error) {
        console.error('Error fetching data from both endpoints:', error);
    }
}

fetchDataFromBothEndpoints();</code></pre>
            <p>Using existing data analytics tools, Picsart was able to analyze the performance of the new service from day one, starting with a dummy endpoint and a 'hello world' response. With that, a v0 was created that did not yet implement the real logic, but simulated reading multiple values from KV and returning a response of realistic size to the end user.</p>
    <div>
      <h2>The need for a do-over</h2>
      <a href="#the-need-for-a-do-over">
        
      </a>
    </div>
    <p>In the initial phase, outcomes fell short of expectations. Surprisingly, requests were slower despite the service's proximity to end users. What caused this setback? Subsequent investigation revealed multiple culprits and design patterns in need of optimization.</p>
    <div>
      <h3>Data segmentation</h3>
      <a href="#data-segmentation">
        
      </a>
    </div>
    <p>The previous, stateful solution operated on a single massive "blob" of data exceeding 100MB in value. Loading this into memory incurred a few seconds of initial startup time, but once the VM completed the task, request handling was fast, benefiting from the readily available data in memory.</p><p>However, this approach doesn't seamlessly transition to the serverless realm. Unlike long-running VMs, Worker isolates have short lifespans. Repeatedly parsing large JSON objects led to prolonged compute durations. Simply parsing four KV entries of 25MB each (KV maximum value size is 25MB) on each request was not a feasible option.</p><p>The Picsart team went back to solution design and embarked on a journey to optimize their system's execution time, resulting in a series of impactful improvements.</p><p>The fundamental insight that guided the solution was the unnecessary overhead that was involved in loading and parsing data irrelevant to the user's specific context. The 100MB configuration file contained configurations for all platforms and locations worldwide – a setup that was far from efficient in a globally distributed, serverless compute environment. For instance, when processing requests from users in the United States, there was no need to fetch configurations targeted for users in other countries, or for different platforms.</p><p>To address this inefficiency, the Picsart team stored the configuration of each platform and country in separate KV records. This targeted strategy meant that for a request originating from a US user on an Android device, our system would only fetch and parse the KV record specific to Android users in the US, thereby excluding all irrelevant data. This resulted in approximately 600 KV records, each with a maximum size of 10MB. While this leads to data duplication on the KV storage side, it decreases the amount of data that needs to be parsed upon request. 
As Cloudflare operates in over 120 countries around the world, only a subset of the records was needed in each location. Hence, the increase in cardinality had minimal impact on KV cache performance, as demonstrated by more than 99.5% of KV reads being served from local cache.</p><table>
  <tr>
    <td><b>Key</b></td>
    <td><b>Size</b></td>
  </tr>
  <tr>
    <td>com.picsart.studio_apple_us.json</td>
    <td>6.1MB</td>
  </tr>
  <tr>
    <td>com.picsart.studio_apple_de.json</td>
    <td>6.1MB</td>
  </tr>
  <tr>
    <td>com.picsart.studio_android_us.json</td>
    <td>5.9MB</td>
  </tr>  
  <tr>
    <td>…</td>
    <td>…</td>
  </tr>  
</table>
<small>After (simplified)</small><br /><br /><p>This approach was a significant move for Picsart as they transitioned from a regional cloud setup to Cloudflare's globally distributed connectivity cloud. By serving data from close proximity to end user locations, they were able to combat the high network latency of their previous setup. This strategy radically transformed the data-handling process, unlocking two major benefits:</p><ul><li><p>Performance gains: by ensuring that only the relevant subset of data is fetched and parsed based on the user's platform and geographical location, the wall time and compute resources required for these operations were significantly reduced.</p></li><li><p>Scalability and flexibility: the granular segmentation of data enables effortless scaling of the service for new features or regional content. Adding support for new applications now only requires inserting new, standalone KV records, in contrast to the previous solution where this would require increasing the size of the single record.</p></li></ul>
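<p>The resulting lookup is straightforward: derive the KV key from the user's app, platform, and country, and fetch only that slice. A minimal sketch of the key scheme shown in the table above (the KV binding name <code>CONFIG_KV</code> is an assumption for illustration):</p>

```javascript
// Build the KV key for one configuration slice, following the simplified
// naming scheme from the table above (app id, platform, lowercased country).
function configKey(appId, platform, country) {
  return `${appId}_${platform}_${country.toLowerCase()}.json`;
}

// Inside a Worker, only the matching record would then be fetched, e.g.:
//   const raw = await env.CONFIG_KV.get(
//     configKey('com.picsart.studio', 'android', request.cf.country), 'text');

console.log(configKey('com.picsart.studio', 'apple', 'US'));
// com.picsart.studio_apple_us.json
```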
    <div>
      <h3>Immutable updates</h3>
      <a href="#immutable-updates">
        
      </a>
    </div>
    <p>Now that changes to the configuration were segmented by app, country, and platform, this also allowed for individual updates of the configuration in KV. KV storage showcases its best performance when records are updated infrequently but read very often. This pattern leverages <a href="https://developers.cloudflare.com/kv/reference/how-kv-works/">KV's fundamental design</a> to cache values at edge locations upon reads, ensuring that subsequent queries for the same record are swiftly served by local caches rather than requiring a trip back to KV's centralized data centers. This architecture is fundamental for minimizing latency and maximizing the speed of data retrieval across a globally distributed platform.</p><p>A crucial requirement for Picsart’s experimentation system was the ability to propagate updates of remote configuration values immediately. Updating existing records would require very short cache TTLs and even the minimum KV cache TTL of 60 seconds was considered unacceptable for the dynamic nature of the feature flagging. Moreover, setting short <a href="https://www.cloudflare.com/learning/cdn/glossary/time-to-live-ttl/">TTLs</a> also impacts the <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cache-hit-ratio/">cache hit ratio</a> and the overall KV performance, specifically in regions with low traffic.</p><p>To reconcile the need for both rapid updates and efficient caching, Picsart adopted an innovative approach: making KV records immutable. Instead of modifying existing records, they opted to create new records with each configuration change. By appending the content hash to the KV key and writing new records after each update, Picsart ensured that each record was unique and immutable. This allowed them to leverage higher cache TTLs, as these records would never be updated.</p><table>
  <tr>
    <td><strong>Key</strong></td>
    <td><strong>TTL</strong></td>
  </tr>
  <tr>
    <td>com.picsart.studio_apple_us_b58b59.json</td>
    <td>86400s</td>
  </tr>
  <tr>
    <td>com.picsart.studio_apple_us_273678.json</td>
    <td>86400s</td>
  </tr>
  <tr>
    <td>…</td>
    <td>…</td>
  </tr>  
</table>
<small>After (simplified)</small><br /><br /><p>There was a catch, though: the service must now keep track of the correct KV keys to use. The Picsart team addressed this challenge by storing references to the latest KV keys in the environment variables of the Worker.</p><p>Each configuration change triggers a new KV pair to be written and the service's environment variables to be updated. As global Workers deployments take mere seconds, changes to the experimentation and configuration data become available globally near-instantaneously.</p>
    <div>
      <h3>JSON serialization &amp; alternatives</h3>
      <a href="#json-serialization-alternatives">
        
      </a>
    </div>
    <p>Following the previous improvements, the Picsart team made another significant discovery: only a small fraction of the configuration data is needed to assign the experiments, while the remaining majority of the data comprises the JSON values for the remote configuration payloads. While the service must deliver the latter in the response, the data is not required during the initial processing phase.</p><p>The initial implementation used <a href="https://developers.cloudflare.com/kv/api/read-key-value-pairs/">KV's get()</a> method to retrieve the configuration data with the parameter type=<code>json</code>, which converts the KV value to an object. This is very compute-intensive compared to using the <code>get()</code> method with parameter type=<code>text</code>, which simply returns the value as a string. In the context of Picsart's data, the bulk of the CPU cycles were wasted on deserializing JSON data that is not needed to perform the required business logic.</p><p>What if the data structure and code could be changed in such a way that only the data needed to assign experiments was parsed as JSON, while the configuration values were treated as text? Picsart went ahead with a new approach: splitting each KV record into two, creating a small 300 KB record for the metadata, which can be quickly parsed to an object, and a 9.7 MB record of configuration values. The extracted configuration values are delimited by newline characters. The respective line number is used as a reference in the metadata entry, so that the corresponding configuration value for an experiment can be merged back into the response payload later.</p><div>
    <figure>
        <pre><code>{
  "name": "shape_replace_items",
  "default_value": "&lt;large json object&gt;",
  "segments": [
    {
      "id": "f1244",
      "value": "&lt;Another large json object&gt;"
    },
    {
      "id": "a2lfd",
      "value": "&lt;Yet another large json object&gt;"
    }
  ]
}</code></pre>
    </figure>
    <p><i>Before: metadata and values in one JSON object (simplified)</i></p>
    <figure>
        <pre><code>// com.picsart.studio_apple_am_metadata

{
  "name": "shape_replace_items",
  "default_value": 1,
  "segments": [
    {
      "id": "f1244",
      "value": 2
    },
    {
      "id": "a2lfd",
      "value": 3
    }
  ]
}</code></pre>
        <pre><code>// com.picsart.studio_apple_am_values

1 "&lt;large json object&gt;"
2 "&lt;Another json object&gt;"
3 "&lt;Yet another json object&gt;"</code></pre>
    </figure>
    <p><i>After: metadata and values are split (simplified); the numeric value fields in the metadata reference line numbers in the values record</i></p>
</div><p>After calculating the experiments and selecting the correct variants based solely on the small metadata entry, the service constructs a JSON string for the response containing placeholders for the actual values that reference the corresponding line numbers in the separate text record. To finalize the response, the service replaces the placeholders with the corresponding serialized JSON strings from the text record. This circumvents the need to parse and re-serialize large JSON objects and avoids significant computational overhead.</p><p>Note that parsing the metadata JSON and determining the correct experiments happens in parallel with loading the large record of configuration values, saving additional milliseconds.</p><p>By minimizing the size of the JSON data that needed to be parsed and leveraging a more efficient method for constructing the final response, the Picsart team managed to not only reduce response times but also optimize compute resource usage. This reflects a broader principle applicable across the tech industry: efficiency, particularly in serverless architectures, can often be dramatically improved by rethinking data structure and utilization.</p>
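<p>The merge step can be sketched as follows. Only the small metadata record is parsed; the selected values are spliced into the response as raw strings, never round-tripped through <code>JSON.parse</code>. Function and field names follow the simplified records above; this is an illustration, not Picsart's actual code:</p>

```javascript
// metadataJson: the small record, parsed as JSON.
// valuesText: the large record fetched with type 'text' -- one serialized
// JSON value per line, referenced by 1-indexed line number.
function buildResponse(metadataJson, valuesText) {
  const lines = valuesText.split('\n');
  const metadata = JSON.parse(metadataJson);
  const resolve = (ref) => lines[ref - 1]; // raw JSON string, never parsed

  // Concatenate the raw value strings into the response instead of parsing
  // and re-serializing the large objects.
  const segments = metadata.segments
    .map((s) => `{"id":${JSON.stringify(s.id)},"value":${resolve(s.value)}}`)
    .join(',');

  return `{"name":${JSON.stringify(metadata.name)}` +
    `,"default_value":${resolve(metadata.default_value)}` +
    `,"segments":[${segments}]}`;
}

const metadata = '{"name":"shape_replace_items","default_value":1,"segments":[{"id":"f1244","value":2}]}';
const values = '{"large":"object"}\n{"another":"object"}';
console.log(buildResponse(metadata, values));
// {"name":"shape_replace_items","default_value":{"large":"object"},"segments":[{"id":"f1244","value":{"another":"object"}}]}
```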
    <div>
      <h2>Getting a head start</h2>
      <a href="#getting-a-head-start">
        
      </a>
    </div>
    <p>The changes on the server side, moving from a single-region setup to Cloudflare’s global architecture, paid off massively. Median response times globally dropped by more than 1 second, which was already a huge improvement for the team. However, looking at the new data, two more paths for client-side optimizations were found.</p><p>As the web and mobile apps call the service at startup, most of the time no active connection to the servers is alive, and establishing that connection at request time costs valuable milliseconds.</p><p>For the web version, setting a <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/rel/preconnect">preconnect resource hint</a> on initial page load showed a positive impact. For the mobile app, the Picsart team took it a step further. Investigation showed that before the connection could be established, three modules had to complete initialization: the error tracker, the HTTP client, and the SDK. Reordering the modules to initialize the HTTP client first allowed the connection to be established in parallel with the initialization of the SDK and error tracker, again saving time. This resulted in another 200ms improvement for end users.</p>
    <div>
      <h2>Setting a new personal best</h2>
      <a href="#setting-a-new-personal-best">
        
      </a>
    </div>
    <p>The day had come, and it was time for the phased rollout, web first and the mobile apps second. With suspense, the team looked at the dashboards, and were pleasantly surprised. The rollout was successful and billions of requests were handled with ease.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1aNPXwkM6Nk2lWZaDmJ4NV/e9b3268e3d395a34295ff72231b60e39/image4.png" />
            
            </figure><p><i>Share of successfully delivered experiments</i></p><p>The result? The Picsart apps are loading faster than ever for millions of users worldwide, while the share of successfully delivered experiments increased from 50% to 85%. Median response time dropped from 1500 ms to 280 ms on mobile, and to 70 ms on the web, where the response payload is smaller. This translates to real business impact for Picsart, as they can now deliver more personalized and data-driven experiences to even more of their users.</p>
    <div>
      <h2>A bright future ahead</h2>
      <a href="#a-bright-future-ahead">
        
      </a>
    </div>
    <p>Picsart is already thinking of the next generation of experimentation. To integrate with Cloudflare even further, the plan is to use Durable Objects to store hundreds of millions of user data records in a decentralized fashion, enabling even more powerful experiments without impacting performance. This is possible thanks to Durable Objects' underlying architecture that stores the user data in-region, close to the respective end user device.</p><p>Beyond that, Picsart’s experimentation team is also planning to onboard external B2B customers to their experimentation platform as Cloudflare's developer platform provides them with the scale and global network to handle more traffic and data with ease.</p>
    <div>
      <h2>Get started yourself</h2>
      <a href="#get-started-yourself">
        
      </a>
    </div>
    <p>If you’re also working on or with an application that would benefit from Cloudflare’s speed and scale, check out our developer <a href="https://developers.cloudflare.com/workers/">documentation</a> and <a href="https://developers.cloudflare.com/workers/tutorials/">tutorials</a>, and join our developer <a href="https://discord.cloudflare.com/">Discord</a> to get community support and inspiration.</p> ]]></content:encoded>
            <category><![CDATA[Guest Post]]></category>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">2gXle7Kq9oC5BbCmX6jlwD</guid>
            <dc:creator>Mark Dembo</dc:creator>
            <dc:creator>Shant Marouti</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing advanced session audit capabilities in Cloudflare One]]></title>
            <link>https://blog.cloudflare.com/introducing-advanced-session-audit-capabilities-in-cloudflare-one/</link>
            <pubDate>Thu, 16 Nov 2023 18:49:23 GMT</pubDate>
            <description><![CDATA[ Administrators can now easily audit all active user sessions and associated data used by their Cloudflare One policies. This enables the best of both worlds: extremely granular controls, while maintaining an improved ability to troubleshoot and diagnose ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4pp2wSikAqdX5rt2i33Ngg/6fcd4139b1c3c3f25146342db2ff3f22/image5.png" />
            
            </figure><p>The basis of Zero Trust is defining granular controls and authorization policies per application, user, and device. Having a system with a sufficient level of granularity to do this is crucial to meet both regulatory and security requirements. But there is a potential downside to so many controls: in order to troubleshoot user issues, an administrator has to consider a complex combination of variables across applications, user identity, and device information, which may require painstakingly sifting through logs.</p><p>We think there’s a better way — which is why, starting today, administrators can easily audit all active user sessions and associated data used by their Cloudflare One policies. This enables the best of both worlds: extremely granular controls, while maintaining an improved ability to troubleshoot and diagnose <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> deployments in a single, simple control panel. Information that previously lived in a user’s browser or changed dynamically is now available to administrators without the need to bother an end user or dig into logs.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/vFhYqgPu1WXS7wrcp77u6/3169e065ed0e6780e67a218a5ae607c3/image4.png" />
            
            </figure>
    <div>
      <h3><b>A quick primer on application authentication and authorization</b></h3>
      <a href="#a-quick-primer-on-application-authentication-and-authorization">
        
      </a>
    </div>
    <p><i>Authentication</i> and <i>Authorization</i> are the two components that a Zero Trust policy evaluates before allowing a user access to a resource.</p><p><b>Authentication</b> is the process of verifying the identity of a user, device, or system. Common methods of <a href="https://www.cloudflare.com/learning/access-management/what-is-authentication/">authentication</a> include entering usernames and passwords, presenting a digital certificate, or even biometrics like a fingerprint or face scan. <a href="https://www.cloudflare.com/learning/access-management/what-is-multi-factor-authentication/">Multi-factor authentication (MFA)</a> requires two or more separate methods of authentication for enhanced security, like a hardware key in combination with a password.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/38l3l06Dy248HUhhQ9R3WC/57c17a8279ca8cec195a4c6f67ff9686/image6.png" />
            
            </figure><p><b>Authorization</b> is the process of granting or denying access to specific resources or permissions once an entity has been successfully authenticated. It defines what the authenticated entity can and cannot do within the system.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3jvZdrdbJ1ucOx5niglvaR/ecfcf31cc7e1c9831b8ba15d0ef76e75/image1-3.png" />
            
            </figure>
    <div>
      <h3><b>Application authentication/authorization mechanisms</b></h3>
      <a href="#application-authentication-authorization-mechanisms">
        
      </a>
    </div>
    <p>Web applications, which we'll focus on, generally use HTTP cookies to handle both authentication and authorization.</p><p><b>Authentication:</b></p><ol><li><p><b>Login</b>: When a user logs into a web application by entering their username and password, the application verifies these credentials against its database or with an <a href="https://www.cloudflare.com/learning/access-management/what-is-an-identity-provider/">Identity Provider (IdP)</a>. Additional authentication factors may also be required to achieve multi-factor authentication. If the credentials match, the server or an external security service (e.g., Cloudflare Access) considers the user authenticated.</p></li><li><p><b>Cookie/Token Creation</b>: The server then creates a session for the user in the form of a cookie or JSON Web Token. The session is valid for a period of time, after which the user has to reauthenticate.</p></li><li><p><b>Sending and Storing Cookies</b>: The server sends a response back to the user's browser that includes the session ID and other identifying information about the user in the cookie. The browser stores this cookie and uses it to identify the user in subsequent requests.</p></li></ol><p><b>Authorization:</b></p><ol><li><p><b>Subsequent Requests</b>: For all subsequent requests to the web application, the user's browser automatically includes the cookie (with the session ID and other identifying information) in the request.</p></li><li><p><b>Server-side Verification</b>: The server reads the user data from the cookie and checks that the session is valid. If it is, the server also retrieves the user's details and the access permissions associated with that session ID.</p></li><li><p><b>Authorization Decision</b>: Based on the user's access permissions, the server decides whether the user is authorized to perform the requested operation or access the requested resource.</p></li></ol><p>This way, the user stays authenticated (and their authorization can be checked) for all subsequent requests after logging in, until the session expires or they log out.</p><p>In modern web applications, this session state is most commonly stored in the form of a JSON Web Token (JWT).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5xt08CvRjE8ONSrRf3HK1x/7eae0b1f82f3cd0f1858834d660e53ae/image8.png" />
            
            </figure>
    <div>
      <h3><b>Debugging JWT-based authentication</b></h3>
      <a href="#debugging-jwt-based-authentication">
        
      </a>
    </div>
    <p>JWTs are used for authentication and authorization in many modern web applications and in <a href="https://www.cloudflare.com/learning/access-management/what-is-ztna/">Zero Trust Network Access (ZTNA)</a> solutions like Cloudflare Access. A JWT includes a payload that encodes information about the user and possibly other data, and it's signed by the server to prevent tampering. JWTs are often used in a stateless manner, meaning the server doesn't keep a copy of each JWT; it simply verifies and decodes them as they arrive with requests. Because you do not have to rely on a central system for session management, JWT-based authentication avoids scalability bottlenecks as the number of users accessing a system increases.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5ffaX3vfbgBu5rBy0H1gmR/cfb1f19ef8e6b6a7077997e23cd44e9e/image2-2.png" />
            
            </figure><p>However, this stateless nature of JWTs makes debugging JWT-based authentication tricky without getting the specific JWT from a user. Here's why:</p><p><b>1. Token Specificity</b>: Each JWT is specific to a user and a session. It contains information (claims) about the user, the issuing authority, the token's issuing time, expiration time, and possibly other data. Therefore, to debug a problem, you often need the exact JWT that's causing the issue.</p><p><b>2. No Server-side Records</b>: Since JWTs are stateless, the server does not store sessions by default. It can't look up past tokens or their associated state, unless it's been specifically designed to log them, which is usually not the case due to privacy and data minimization considerations.</p><p><b>3. Transient Issues</b>: Problems with JWTs can be transient—they might relate to the specific moment the token was used. For instance, if a token was expired when a user tried to use it, you'd need that specific token to debug the issue.</p><p><b>4. Privacy and Security</b>: JWTs can contain sensitive information, so they should be handled with care. Getting a JWT from a user might expose their personal information or security credentials to whoever is debugging the issue. In addition, if a user sends their JWT through an insecure channel to a developer or an IT help desk, it could be intercepted (Cloudflare recently released a free <a href="/introducing-har-sanitizer-secure-har-sharing/">HAR Sanitizer</a> to help mitigate this concern).</p><p>These factors make it difficult to troubleshoot issues with JWT-based authentication without having the specific token in question.</p>
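<p>This is why token-in-hand debugging usually starts by decoding the JWT's payload to read its claims. A hypothetical sketch (decode only, no signature verification; never skip verification when actually authenticating):</p>

```javascript
// Illustrative debugging helper: read a JWT's claims without verifying it.
function decodeJwtClaims(jwtToken) {
  const payload = jwtToken.split('.')[1];
  // Convert base64url to base64, then decode; padding is tolerated if absent.
  const b64 = payload.replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(Buffer.from(b64, 'base64').toString('utf8'));
}
```

<p>For instance, comparing the decoded <code>exp</code> claim (seconds since epoch) against the time of the failed request shows whether the token had simply expired, the transient case described in point 3.</p>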
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/31Aj21p1ndlk45ONfHg7n5/93425af409d59da01c5792f0a8b6b7d8/image3.png" />
            
            </figure>
    <div>
      <h3><b>A better way to debug identity issues</b></h3>
      <a href="#a-better-way-to-debug-identity-issues">
        
      </a>
    </div>
    <p>We set out to build a better way to debug issues related to a user’s identity in Cloudflare Zero Trust without sharing JWTs or HAR files back and forth. Administrators can now view a user’s Registry Identity (used for Gateway policies) and all active Access sessions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3a2IX9mPpFTqXHu7LrPNxf/6d15f879803248a4929686cefc79882a/image7.png" />
            
            </figure><p>This session information includes the full identity evaluated by Zero Trust, including IdP claims, device posture information, network context, and more. We were able to build this feature without any additional load on Access’ authentication logic by leveraging Cloudflare Workers KV. At the time a user authenticates with Access, their associated identity is immediately saved as a key/value pair in Workers KV. This all occurs within the context of the user’s authentication event, which means there is minimal latency impact or reliance on an external service.</p><p>This feature is available to all customers across all Zero Trust plans. If you would like to get started with Cloudflare Zero Trust, <a href="https://dash.cloudflare.com/sign-up/teams">sign up for a free account</a> for up to 50 users today! Or, collaborate with Cloudflare experts to discuss <a href="https://www.cloudflare.com/learning/access-management/security-service-edge-sse/">SSE</a> or SASE for your organization and <a href="https://www.cloudflare.com/products/zero-trust/plans/enterprise/">tackle your Zero Trust use cases</a> one step at a time.</p> ]]></content:encoded>
            <category><![CDATA[SASE]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <guid isPermaLink="false">7tg9mNqV9zSgFQ26BZ9d37</guid>
            <dc:creator>Kenny Johnson</dc:creator>
        </item>
        <item>
            <title><![CDATA[Random Employee Chats at Cloudflare]]></title>
            <link>https://blog.cloudflare.com/random-employee-chats-cloudflare/</link>
            <pubDate>Sat, 20 Mar 2021 12:00:00 GMT</pubDate>
            <description><![CDATA[ We developed the Random Employee Chats application internally, with the goal of recreating the pre-pandemic informal interactions. Here's how we moved from a shared spreadsheet to Cloudflare Workers to automate the entire process. ]]></description>
            <content:encoded><![CDATA[ <p>Due to the COVID-19 pandemic, most Cloudflare offices closed in March 2020, and employees began working from home. Holding meetings online presented its own challenges, but preserving the benefits of casual encounters in physical offices was something we struggled with. Those informal interactions, like teams talking next to the coffee machine, help form the social glue that holds companies together.</p><p>In an attempt to recreate that experience, David Wragg, an engineer at Cloudflare, introduced “Random Engineer Chats” (we call them “Random Employee Chats” here, since the idea applies to any team). The idea is that participants are randomly paired, and the pairs then schedule a 30-minute video call. There’s no fixed agenda for these conversations, but the participants might learn what is going on in other teams, gain new perspectives on their own work by discussing it, or meet new people.</p><p>The first iteration of Random Employee Chats used a shared spreadsheet to coordinate the process. People would sign up by adding themselves to the spreadsheet, and once a week, David would randomly form pairs from the list and send out emails with the results. Then, each pair would schedule a call at their convenience. This was the minimum viable implementation of the idea, but it meant the whole process relied on a single person.</p>
    <div>
      <h3>Moving to Cloudflare Workers</h3>
      <a href="#moving-to-cloudflare-workers">
        
      </a>
    </div>
    <p>We wanted to automate these repetitive manual tasks, and naturally, we wanted to use <a href="https://workers.cloudflare.com/">Cloudflare Workers</a> to do it. This is a great example of a complete application that runs entirely in Cloudflare Workers on the edge with no backend or origin server.</p><p>The technical requirements included:</p><ul><li><p>A user interface so people can sign up</p></li><li><p>Storage to keep track of the participants</p></li><li><p>A program that automatically pairs participants and notifies each pair</p></li><li><p>A program that reminds people to register for the next sessions</p></li></ul><p>Workers met all of these requirements, and the resulting application runs in Cloudflare's edge network without any need to run code or store data on other platforms. The Workers script serves the UI as static HTML and JavaScript assets, and for storage, Workers KV keeps track of people who have signed up.</p><p>We also recently announced <a href="https://developers.cloudflare.com/workers/platform/cron-triggers">Workers Cron Triggers</a>, which allow us to run a Cloudflare Workers script on a defined schedule. Cron Triggers are perfect for pairing people up before the sessions and reminding users to register for the next session.</p>
    <div>
      <h3>The User Interface</h3>
      <a href="#the-user-interface">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/26ih1wWQgNslI8kNjBYq1d/20676f1eef4c8052cc6ec2dda8c06eb7/Random-Engineer-Chat-Dashboard-1.png" />
            
            </figure><p>The interface is very simple. It shows the list of participants and allows users to register for the next session.</p><p>When a user clicks on the register button, it calls an API that adds a key in Workers KV:</p>
            <pre><code>key: register:ID
value: {"name":"Sven Sauleau","picture":"picture.jpg","email":"example@cloudflare.com"}</code></pre>
            <p>User information is stored in Workers KV and displayed in the interface to create the list of participants. The user information gets deleted during pairing so the list is ready for the next round of chats. Anyone who wants to participate must sign up each week, which confirms their availability.</p><p>The code for the interface can be found <a href="https://github.com/cloudflare/random-employee-chat/tree/master/src/workers/randengchat/public">here</a> and the API is <a href="https://github.com/cloudflare/random-employee-chat/blob/master/src/workers/randengchat/server/index.js">here</a>.</p>
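<p>As a sketch, the registration handler boils down to a single KV write with the key/value shape shown above. The function and field names here are illustrative assumptions rather than the exact code from the linked repo; <code>kv</code> stands for the Workers KV binding, which exposes an asynchronous <code>put()</code>:</p>

```javascript
// Illustrative register handler: store one user under a "register:" key.
async function registerUser(kv, user) {
  const key = `register:${user.id}`;
  await kv.put(key, JSON.stringify({
    name: user.name,
    picture: user.picture,
    email: user.email,
  }));
  return key;
}
```

<p>In the Worker, the API route would call this with the bound namespace before returning a success response; the <code>register:</code> prefix is what lets the pairing script later list all sign-ups in one call.</p>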
    <div>
      <h3>Forming the pairs</h3>
      <a href="#forming-the-pairs">
        
      </a>
    </div>
    <p>A Random Employee Chat is a one-on-one conversation, so at a set time, the application puts participants into pairs. Each Monday morning at 0800 UTC, a Workers cron job runs the pairing script which is deployed using <a href="https://developers.cloudflare.com/workers/cli-wrangler">Wrangler</a>.</p><p>Wrangler supports configuring the schedule for a job using the familiar cron notation. For instance, our wrangler.toml has:</p>
            <pre><code>name = "randengchat-cron-pair"
type = "webpack"
account_id = "..."
webpack_config = "webpack.config.js"
…

kv_namespaces = [...]

[triggers]
crons = ["0 8 * * 2"]</code></pre>
            <p>The pairing script is the most intricate part of the application, so let’s run through its code. First, we list the users that are currently registered. This is done using the <a href="https://developers.cloudflare.com/workers/runtime-apis/kv#listing-by-prefix">list</a> function in Workers KV extracting keys with the prefix <code>register:</code>.</p>
            <pre><code>const list = await KV_NAMESPACE.list({ prefix: "register:" });</code></pre>
            <p>If we don’t have an even number of participants, we remove one person from the list (David!).</p><p>Then, we create all possible pairs and attach a weight to them.</p>
            <pre><code>async function createWeightedPairs() {
  const pairs = [];
  for (let i = 0; i &lt; keys.length - 1; i++) {
    for (let j = i + 1; j &lt; keys.length; j++) {
      const weight = (await countTimesPaired(...)) * -1;
      pairs.push([i, j, weight]);
    }
  }
  return pairs;
}</code></pre>
            <p>For example, suppose four people have registered (Tom, Edie, Ivie and Ada); that’s 6 possible pairs (<a href="https://www.wolframalpha.com/input/?i=4+choose+2">4 choose 2</a>). We might end up with the following pairs, each annotated with the number of times that pair has met (the weight used for matching is the negative of this count):</p>
            <pre><code>(Tom, Edie, 1)
(Tom, Ivie, 0)
(Tom, Ada, 1)
(Edie, Ivie, 2)
(Edie, Ada, 0)
(Ivie, Ada, 2)</code></pre>
            <p>The weight is based on the number of times a pair has matched in the past, to avoid scheduling chats between people who have already met. More sophisticated factors could be taken into account, such as whether the pair shares an office or timezone, or when they last met.</p><p>We keep track of how many times each pair has matched using a count kept in KV:</p>
            <pre><code>async function countTimesPaired(key) {
  const v = await DB.get(key, "json");
  if (v !== null &amp;&amp; v.count) {
    return v.count;
  }
  return 0;
}</code></pre>
            <p>The participants form a complete graph, with people as nodes and each edge weighted by the number of times the two people it connects have met.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/anoEru6iUYXNlVFxpCk4w/67d7587b1440044c1a9072545dc8cce6/image5-20.png" />
            
            </figure><p>Next, we run a weighted matching algorithm, in our case the <a href="https://en.wikipedia.org/wiki/Blossom_algorithm">Blossom algorithm</a>, which finds a maximum matching on the graph: a set of edges that connects as many pairs of people as possible, with each person appearing exactly once. Because we use the weighted form of the algorithm, and our weights are the negated meeting counts, the matching it finds also minimizes the number of times people have met previously.</p><p>In the case above, the algorithm suggests the optimal pairs are (Tom, Ivie) and (Edie, Ada). Neither pair has met before.</p>
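<p>For a handful of participants, the effect of the weighted matching can be reproduced with brute force: enumerate every perfect matching and keep the one with the highest total weight. This illustrative sketch (not the Blossom implementation the application uses) recovers the pairing described above:</p>

```javascript
// Brute-force maximum-weight perfect matching: pair the first person with each
// possible partner, recurse on the rest, and keep the best total. Assumes an
// even number of participants, as in the article. Exponential, so only a
// stand-in for Blossom on tiny inputs.
function bestMatching(people, weight) {
  if (people.length === 0) return { total: 0, pairs: [] };
  const [first, ...rest] = people;
  let best = null;
  for (const [i, partner] of rest.entries()) {
    const remaining = rest.filter((_, j) => j !== i);
    const sub = bestMatching(remaining, weight);
    const total = sub.total + weight(first, partner);
    if (best === null || total > best.total) {
      best = { total, pairs: [[first, partner], ...sub.pairs] };
    }
  }
  return best;
}

// Meeting counts from the example above; the weight is the negated count.
const counts = { "Tom|Edie": 1, "Tom|Ivie": 0, "Tom|Ada": 1,
                 "Edie|Ivie": 2, "Edie|Ada": 0, "Ivie|Ada": 2 };
const pairWeight = (a, b) => -(counts[`${a}|${b}`] ?? counts[`${b}|${a}`]);

const result = bestMatching(["Tom", "Edie", "Ivie", "Ada"], pairWeight);
// result.pairs → [["Tom","Ivie"],["Edie","Ada"]], the pairing from the figure
```

<p>Maximizing the sum of negated counts is the same as minimizing the number of past meetings, which is exactly what the weighted Blossom algorithm achieves efficiently on larger groups.</p>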
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Fi6gV8b7uZHZMVR7jouFF/0e7ef8ef4cdcd6dde01e4fb637bc964c/image2-18.png" />
            
            </figure><p>The pairs are recorded in Workers KV with their updated matching count to refine the weights at future sessions:</p>
            <pre><code>key: paired:ID
value: {"emails":["left@cloudflare.com","right@cloudflare.com"],"count":1}</code></pre>
            <p>A notification is sent to each pair of users to notify them that they matched.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7xVSUWDwEFKqY2KS9Mupwj/0e7e502307c3a67fc6e7d124a08a5101/image4-17.png" />
            
            </figure><p>Once the pairing is done, all <code>register:</code> keys are deleted from KV.</p><p>All the pairing logic is <a href="https://github.com/cloudflare/random-employee-chat/blob/master/src/workers/cron-pair/index.js">here</a>.</p>
    <div>
      <h3>Reminders</h3>
      <a href="#reminders">
        
      </a>
    </div>
    <p>The application sends users a reminder to sign up every week. For the reminder, we use another Workers cron job that runs every Thursday at 1300 UTC. The schedule in wrangler.toml is:</p>
            <pre><code>[triggers]
crons = ["0 13 * * 5"]</code></pre>
            <p>This script is much simpler than the pairing script. It posts a message to a room in our company messaging platform, notifying all members of the channel.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ACIIWbP4COwxXbImfqiQE/78488aeabc86a3a1b6d382ffe32f42a2/image3-19.png" />
            
            </figure><p>All the reminder code is <a href="https://github.com/cloudflare/random-employee-chat/blob/master/src/workers/cron-reminder/index.js">here</a>.</p><p>We hope you find this code useful and that it inspires you to use Workers, Workers KV, Workers Unbound and Workers Cron Triggers to write large, real applications that run entirely without a backend server.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Wrangler]]></category>
            <guid isPermaLink="false">3MzecqWNHhc0wbjnxE2kiX</guid>
            <dc:creator>Sven Sauleau</dc:creator>
            <dc:creator>David Wragg</dc:creator>
        </item>
        <item>
            <title><![CDATA[Workers KV - free to try, with increased limits!]]></title>
            <link>https://blog.cloudflare.com/workers-kv-free-tier/</link>
            <pubDate>Mon, 16 Nov 2020 20:00:00 GMT</pubDate>
            <description><![CDATA[ Announcing a Free Tier for Workers KV that opens up global, low-latency data storage to every developer on the Workers platform. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5mRIDbFJHlqdDBXtLj9LBg/6e75009c6b894f44795e904e64b12de3/Workers-KV-transparent.png" />
            
            </figure><p>In May 2019, we launched <a href="https://www.cloudflare.com/products/workers-kv/">Workers KV</a>, letting developers store key-value data and make that data globally accessible from Workers running in Cloudflare’s network of over 200 data centers.</p><p>Today, we’re announcing a Free Tier for Workers KV that opens up global, low-latency data storage to every developer on the Workers platform. Additionally, to expand Workers KV’s use cases even further, we’re also raising the maximum value size from 10 MB to 25 MB. You can now write an application that serves larger static files or JSON blobs directly from KV.</p><p>Together with our announcement of the <a href="/introducing-workers-durable-objects">Durable Objects limited beta last month</a>, the Workers platform continues to move toward providing storage solutions for applications that are globally deployed as easily as an application running in a single data center today.</p>
    <div>
      <h3>What are the new free tier limits?</h3>
      <a href="#what-are-the-new-free-tier-limits">
        
      </a>
    </div>
    <p>The free tier includes 100,000 read operations and 1,000 each of write, list and delete operations per day, resetting daily at 00:00 UTC, with a maximum total storage size of 1 GB. Operations that exceed these limits will fail with an error.</p><p>Additional KV usage costs $0.50 per million read operations, $5.00 per million list, write and delete operations, and $0.50 per GB of stored data.</p><p>We intentionally chose these limits to prioritize use cases where KV works well: infrequently written data that may be frequently read around the globe.</p>
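<p>As a back-of-the-envelope illustration of the paid rates above (the usage numbers are hypothetical, and the free tier allowance is ignored for simplicity):</p>

```javascript
// Illustrative arithmetic only: cost of KV usage at the listed rates.
function kvUsageCost({ reads, writes, lists, deletes, storedGB }) {
  const readCost = (reads / 1e6) * 0.50;                            // $0.50 per million reads
  const writeClassCost = ((writes + lists + deletes) / 1e6) * 5.00; // $5.00 per million
  const storageCost = storedGB * 0.50;                              // $0.50 per GB stored
  return readCost + writeClassCost + storageCost;
}

// 10M reads, 1M combined write/list/delete operations, 2 GB stored:
// $5 (reads) + $5 (write-class) + $1 (storage) = $11
const bill = kvUsageCost({ reads: 10e6, writes: 500000, lists: 250000,
                           deletes: 250000, storedGB: 2 });
```
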
    <div>
      <h3>What is the new KV value size limit?</h3>
      <a href="#what-is-the-new-kv-value-size-limit">
        
      </a>
    </div>
    <p>We’re raising the value size limit in Workers KV from 10 MB to 25 MB. Users frequently store static assets in Workers KV to then be served by Workers code. To make it as easy as possible to deploy your entire site on Workers, we’re raising the value size limit to handle even larger assets.</p><p>Since Workers Sites <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">hosts your site</a> from Workers KV, the increased size limit also means Workers Sites assets can now be as large as 25 MB.</p>
    <div>
      <h3>How does Workers KV work?</h3>
      <a href="#how-does-workers-kv-work">
        
      </a>
    </div>
    <p>Workers KV stores key-value pairs and caches hot keys in Cloudflare’s data centers around the world. When a request hits a Worker that uses KV, it retrieves the KV pair from Cloudflare’s local cache with low latency if the pair has been accessed recently.</p><p>While some programs running on the Workers platform are stateless, it is often necessary to distribute files or configuration data to running Workers. Workers KV allows you to persist data and access it across multiple Workers calls.</p><p>For example, let’s say I wanted to serve a static text file from Cloudflare’s edge. I could provision my own <a href="https://www.cloudflare.com/learning/cloud/what-is-object-storage/">object storage</a>, host it on my own domain, and put that domain behind Cloudflare.</p><p>With Workers KV, however, that reduces down to a few simple steps. First, I bind my KV namespace to my Workers code with Wrangler.</p>
            <pre><code>wrangler kv:namespace create "BUCKET"</code></pre>
            <p>Then, in my wrangler.toml, I add my new namespace id to associate it with my Worker.</p>
            <pre><code>kv_namespaces = [
  { binding = "BUCKET", id = "&lt;insert-id-here&gt;" }
]</code></pre>
            <p>I can upload a new text file from the command line using Wrangler:</p>
            <pre><code>$ wrangler kv:key put --binding=BUCKET "my-file" value.txt --path</code></pre>
            <p>And then serve that file from my Workers script with low latency from any of Cloudflare’s points of presence around the globe!</p>
            <pre><code>addEventListener('fetch', event =&gt; {
  event.respondWith(handleEvent(event))
})

async function handleEvent(event) {
  const txt = await BUCKET.get("my-file")
  return new Response(txt, {
    headers: {
      "content-type": "text/plain"
    }
  })
}</code></pre>
            <p>Beyond file hosting, Workers users have built many other types of applications with Workers KV:</p><ul><li><p>Mass redirects - handle billions of HTTP redirects.</p></li><li><p>Access control rules - validate user requests to your API.</p></li><li><p>Translation keys - dynamically localize your web pages.</p></li><li><p>Configuration data - manage who can access your origin.</p></li></ul><p>While Workers KV provides low latency access across the globe, it may not return the most up-to-date data if updates to keys are happening more than once a minute or from multiple data centers simultaneously. For use cases that cannot tolerate stale data, Durable Objects is a better solution.</p>
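<p>As an illustrative sketch of the first use case, a mass-redirect Worker reduces to a single KV lookup keyed by path. The <code>REDIRECTS</code> binding name is an assumption for this example; <code>kv</code> is any object with an async <code>get()</code>, such as a bound namespace:</p>

```javascript
// Illustrative mass-redirect lookup: map a request path to a target URL in KV.
async function resolveRedirect(kv, requestUrl) {
  const { pathname } = new URL(requestUrl);
  return kv.get(pathname); // the target URL, or null if no redirect is set
}

// In the Worker: const target = await resolveRedirect(REDIRECTS, request.url);
// followed by Response.redirect(target, 301) when a target is found.
```
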
    <div>
      <h3>Get started with Workers KV today, for free</h3>
      <a href="#get-started-with-workers-kv-today-for-free">
        
      </a>
    </div>
    <p>The free tier and increased limits are live now!</p><p>You can get started with Workers and Workers KV in the Cloudflare dash. To see an example of how to use Workers KV, check out the <a href="https://developers.cloudflare.com/workers/tutorials/build-a-jamstack-app">tutorial</a> in the Workers documentation.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <guid isPermaLink="false">DjGTxZbrah7C5qA5sbRjW</guid>
            <dc:creator>Greg McKeon</dc:creator>
        </item>
        <item>
            <title><![CDATA[Migrating cdnjs to serverless with Workers KV]]></title>
            <link>https://blog.cloudflare.com/migrating-cdnjs-to-serverless-with-workers-kv/</link>
            <pubDate>Thu, 10 Sep 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare powers cdnjs, an open-source project that delivers popular JavaScript libraries to over 11% of websites. Today, we are excited to announce its migration to a serverless infrastructure using Cloudflare Workers and its distributed key-value store Workers KV! ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare powers <a href="https://cdnjs.com/">cdnjs</a>, an open-source project that accelerates websites by delivering popular JavaScript libraries and resources via <a href="https://www.cloudflare.com/cdn/">Cloudflare’s network</a>. Since <a href="/an-update-on-cdnjs/">our major update in December</a>, we focused on remodelling cdnjs for scalability and resilience. Today, we are excited to announce how Cloudflare delivers cdnjs—a migration to a serverless infrastructure using <a href="https://developers.cloudflare.com/workers/">Cloudflare Workers</a> and its distributed key-value store <a href="https://developers.cloudflare.com/workers/reference/storage">Workers KV</a>!</p>
    <div>
      <h3>What is cdnjs?</h3>
      <a href="#what-is-cdnjs">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/K5YOmwNbwMpTE8ILtKovN/a242c00ce389b1e3365a3e8ffb062d01/imagesmall.png" />
            
            </figure><p>For those unfamiliar, cdnjs is shorthand for a Content Delivery Network (CDN) for JavaScript (JS). A <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN</a> simply refers to a geographically distributed network of servers that provide Internet content, whether it is memes, cat videos, or HTML pages. In our case, the CDN refers to Cloudflare’s ever-<a href="/cloudflare-network-expands-to-more-than-100-countries/">expanding network</a> of over 200 globally distributed data centers.</p><p>And here’s why this is relevant to you: it makes page load times lightning-fast. Virtually every website you visit needs to fetch JS libraries in order to load, including this one. Let’s say you visit a Sydney-based website that contains a local file from jQuery, a popular library found in <a href="https://w3techs.com/technologies/details/js-jquery">76.2%</a> of websites. If you are located in New York, you may notice a delay, as it can easily exceed 300ms to fetch the file—not to mention the time it takes for the round trips involved with the <a href="/tls-1-3-overview-and-q-and-a/">TLS handshake</a>. However, if the website references jQuery using <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js">cdnjs.cloudflare.com</a>, you can retrieve the file from the closest Cloudflare data center in Buffalo, reducing the latency to a blazing 20ms.</p><p>While cdnjs operates behind the scenes, it is used by <a href="https://w3techs.com/technologies/overview/content_delivery">over 11%</a> of websites, making the Internet a much faster and more reliable place. In July, cdnjs served almost <a href="https://github.com/cdnjs/cf-stats/blob/master/2020/cdnjs_July_2020.md">190 billion requests</a>—an enormous 3.46PB of data.</p>
    <div>
      <h3>Where are the files stored?</h3>
      <a href="#where-are-the-files-stored">
        
      </a>
    </div>
    <p>While cdnjs speeds up the Internet, it certainly isn’t magic!</p><p>Historically, a number of <a href="https://www.cloudflare.com/load-balancing/">load-balanced</a> machines at one of Cloudflare’s core data centers would periodically pull cdnjs files from a backing store, acting as the origin for cdnjs.cloudflare.com. When a new file is requested, it is <a href="https://www.cloudflare.com/learning/cdn/what-is-caching/">cached</a> by Cloudflare, allowing it to be fetched quickly from any of our data centers.</p><p>The backing store is a catalogue of JS, CSS, and other web libraries in the form of an open-source <a href="https://github.com/">GitHub</a> repository. What this means is that anyone—including you—can contribute to it, subject to review and other processes.</p><p>However, until recently, these operations were very labor-intensive and fragile.</p><p>This blog post will explain why we changed the infrastructure behind cdnjs to make it faster, more reliable, and easier to maintain. First, we will discuss how the community used to contribute to cdnjs, outlining the pains and concerns of the old system. Then, we will explore the benefits of migrating to Workers KV. After that, we will dive into the new architecture, as well as upgrades to the website and cdnjs API. Finally, we will review the history of cdnjs, and where it is headed in the future.</p>
    <div>
      <h3>If you think you know how to make a PR, think again</h3>
      <a href="#if-you-think-you-know-how-to-make-a-pr-think-again">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Qz4kp8D2lbh2dSPTBQizG/1bcf42ec9c4b6f0553c38ce59666b707/image1-1.png" />
            
            </figure><p>For the non-technical reader, a pull request (PR) is a request to merge changes you’ve made to a repository. Traditionally, if you wanted to include your JavaScript library in cdnjs, you would first create a PR on GitHub to <a href="https://github.com/cdnjs/cdnjs">cdnjs/cdnjs</a> with a JSON file describing your package and additional files for any version you wished to include. Once your PR was approved by our <a href="https://github.com/PeterBot">old bot</a>, manually reviewed, and then merged by a maintainer, your package would be integrated with cdnjs.</p><p>Sounds easy, right? You can just fork the repo, clone it, and copy paste a few files, no?</p><p>Exactly. Contributing was easy if you had several hours to burn, a case-sensitive file system, and a couple hundred gigabytes of free disk space to <a href="https://git-scm.com/docs/git-clone">git clone</a> the 300GB repo. If you were short on time—no problem, you could always use your advanced knowledge of <a href="https://git-scm.com/docs/git-sparse-checkout">git sparse-checkout</a> to get the job done. Don’t know git? Just add one file at a time manually through GitHub’s UI.</p><p>I think you get the point. I know I certainly did when I naively spent 10 hours cloning the repo, only to discover that macOS is case-insensitive by default.</p><p>However, updating cdnjs was not only difficult for the contributors, but also the maintainers. Historically, the community was able to contribute version files directly, which could potentially be malicious. This created lots of work for maintainers, requiring them to inspect each file manually, <a href="https://man7.org/linux/man-pages/man1/diff.1.html">diffing</a> files against the official library source and running malware checks.So how did packages update once they were in cdnjs? In the JSON file describing each package, there was an optional auto-update definition telling the bot where to look for new versions of the library. 
If present, when your package released a new version from npm or GitHub, the bot would download it, push the files to <a href="https://github.com/cdnjs/cdnjs">cdnjs/cdnjs</a>, and push computed <a href="https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity">Subresource Integrity</a> (SRI) hashes to <a href="https://github.com/cdnjs/SRIs">cdnjs/SRIs</a>. If the auto-update property was missing, it would be your responsibility to make manual PRs to update cdnjs with any future versions.</p>
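<p>As a rough sketch, such an auto-update definition might have looked something like this (abridged and hypothetical; the exact field names in the old repo varied):</p>
            <pre><code>{
  "name": "example-lib",
  "filename": "example-lib.min.js",
  "autoupdate": {
    "source": "npm",
    "target": "example-lib"
  }
}</code></pre>
<p>The key idea is simply that the package file points the bot at an upstream source (npm or a git repo), so new releases can be ingested without a human opening a PR.</p>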
    <div>
      <h3>A wake-up call for cdnjs</h3>
      <a href="#a-wake-up-call-for-cdnjs">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Bh8OS2VHZy5FKAySbhQns/84626f199211f856b3f230fcb109c161/image6-1-4.png" />
            
            </figure><p>In April, during maintenance at one of our core data centers, a technician accidentally disconnected the cables supplying all external connections to our other data centers, causing the data center to go offline for approximately four hours. <a href="/cloudflare-dashboard-and-api-outage-on-april-15-2020/">This incident</a> served as the first wake-up call for cdnjs, especially since the affected data center housed the primary cdnjs origin web servers. In this case, we did have a backup running on an external provider, but what really saved us was Cloudflare’s global cache, which minimized the impact of the outage as only uncached assets failed to load.</p><p>We started to think about how we could improve both the reliability and performance of how we serve cdnjs. We went straight to <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>, our own platform for developing on the <a href="https://www.cloudflare.com/learning/cdn/glossary/edge-server/">edge</a>. One powerful tool built into Workers is <a href="https://developers.cloudflare.com/workers/reference/storage">Workers KV</a>—a low-latency, globally distributed key-value store optimized for high-read applications.</p><p>We put two and two together, realizing that instead of pulling the <a href="https://github.com/cdnjs/cdnjs">cdnjs/cdnjs</a> repository and serving files from disk, we could cut the physical machines out entirely, distributing the data around the world and serving files straight from the edge. That way, cdnjs would be able to recover from any origin data center failure, while also increasing its scalability.</p>
    <div>
      <h3>Workers KV to the rescue</h3>
      <a href="#workers-kv-to-the-rescue">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6dAu3V0R16Ugbf2Ad0nh4f/3ce544c6aa6c9558fdfbe686ff5ef640/image3-3.png" />
            
            </figure><p>At first glance, the decision to use Workers KV was a no-brainer. Since files in cdnjs never change but require frequent reads, Workers KV was a perfect fit.</p><p>However, as we planned our migration, we became concerned that with over 7 million assets in cdnjs, there would undoubtedly exist files that exceed <a href="https://developers.cloudflare.com/workers/about/limits/">Workers KV’s 10MiB value limit</a>. After investigating, we discovered that several hundred cdnjs files were oversized, the majority being <a href="https://developer.mozilla.org/en-US/docs/Tools/Debugger/How_to/Use_a_source_map">JavaScript Source Maps</a>.</p><p>Then the idea hit us. We could store compressed versions of cdnjs files in Workers KV, not only solving our oversized file issue, but also optimizing how we serve files.</p><p>If you pay the Internet bill, you’ll know that <a href="/the-relative-cost-of-bandwidth-around-the-world/">bandwidth is expensive</a>! For this reason, all modern browsers will try to <a href="/efficiently-compressing-dynamically-generated-53805/">fetch compressed web content</a> whenever it is available. Similarly, within Cloudflare we often <a href="/results-experimenting-brotli/">experiment with on-the-fly compression</a> to reduce our bandwidth, always serving compressed content to the eyeball when it is accepted. As a result, we decided to compress all cdnjs files ahead of time, writing them to Workers KV with both optimal <a href="https://github.com/google/brotli">Brotli</a> and <a href="https://www.gzip.org/">gzip</a> forms. That way, we could increase the compression level compared to on-the-fly compression as we no longer have the latency requirements.</p><p>This means we now serve cdnjs files faster and smaller!</p>
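<p>As a rough sketch of what publish-time compression can look like (our own illustration, not the actual cdnjs tooling), using Node’s built-in <code>zlib</code>; maximum-effort settings are affordable here because nothing on the request path waits for this work:</p>
            <pre><code>const zlib = require("zlib");

// Compress a file once at publish time, producing both variants that
// would be written to Workers KV alongside each other.
function compressForKV(contents) {
  const raw = Buffer.from(contents);
  return {
    br: zlib.brotliCompressSync(raw, {
      params: { [zlib.constants.BROTLI_PARAM_QUALITY]: zlib.constants.BROTLI_MAX_QUALITY },
    }),
    gzip: zlib.gzipSync(raw, { level: zlib.constants.Z_BEST_COMPRESSION }),
  };
}</code></pre>
<p>Because the files never change, this cost is paid exactly once per file per release, while every subsequent read serves the already-compressed bytes.</p>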
    <div>
      <h3>A complete makeover for cdnjs</h3>
      <a href="#a-complete-makeover-for-cdnjs">
        
      </a>
    </div>
    <p>Today, if you want to include your JavaScript library in cdnjs, you first create a PR on GitHub to our new repository <a href="https://github.com/cdnjs/packages">cdnjs/packages</a>. The repo is easily cloneable at 50MB and consists of thousands of JSON files, each describing a cdnjs package and how it is auto-updated from npm or git. Once your file is validated by our automated CI—powered by a <a href="https://github.com/cdnjs/tools">new bot</a>—and merged by a maintainer, your package is automatically enrolled in our auto-update service.</p><p>In the new system, security and maintainability are prioritized. For starters, cdnjs version files are created by our bot, minimizing the possibility of human error when merging a new version. While the JSON files in <a href="https://github.com/cdnjs/packages">cdnjs/packages</a> are added by error-prone humans, they are inspected by our bot before being approved by a maintainer. Each file is automatically validated against a <a href="https://github.com/cdnjs/tools/blob/master/schema_human.json">JSON schema</a>, as well as checked for popularity on npm or GitHub.</p><p>When the bot discovers a new release, it pushes Brotli and gzip-compressed versions of the files to a files namespace in Workers KV. With each entry, the bot writes some <a href="/catching-up-with-workers-kv/">metadata in Workers KV</a> for the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag">ETag</a> and <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified">Last-Modified</a> HTTP headers. 
Similar to before, the bot also computes <a href="https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity">Subresource Integrity</a> (SRI) hashes of the uncompressed files, but now pushes them to an SRIs namespace in Workers KV.</p><p>Then, when a new file is requested from cdnjs.cloudflare.com, a <a href="https://developers.cloudflare.com/workers/">Cloudflare Worker</a> will inspect the client’s <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Encoding">Accept-Encoding</a> header, fetching either the Brotli or gzip-compressed version with its ETag and Last-Modified metadata from Workers KV. As the compressed file travels back through Cloudflare, it is cached for future requests and uncompressed on-the-fly if needed.</p><p>At the moment, there are still a handful of files exceeding Workers KV’s size limit. Consequently, if the Cloudflare Worker fails to retrieve a file from Workers KV, it is fetched from the origin backed by the original git repo. In the coming months, we plan on gradually removing this infrastructure.</p>
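<p>A condensed sketch of that request path (our illustration only; the KV key layout and metadata field names here are invented for the example, not the production scheme):</p>
            <pre><code>// Decide which stored variant to serve based on the client's Accept-Encoding.
function pickEncoding(acceptEncoding) {
  const accepted = (acceptEncoding || "").toLowerCase();
  if (accepted.includes("br")) return "br";
  if (accepted.includes("gzip")) return "gzip";
  return null;
}

// Hypothetical Worker handler, assuming one KV entry per file per encoding.
async function serveAsset(request, files, path) {
  const encoding = pickEncoding(request.headers.get("Accept-Encoding"));
  if (encoding === null) return fetch(request); // no compressed variant accepted: fall back
  const { value, metadata } = await files.getWithMetadata(path + ":" + encoding, "arrayBuffer");
  if (value === null) return fetch(request); // e.g. oversized file not in KV: use the origin
  return new Response(value, {
    headers: {
      "Content-Encoding": encoding,
      "ETag": metadata.etag,
      "Last-Modified": metadata.lastModified,
    },
  });
}</code></pre>
<p>The key point is that the Worker never decompresses anything itself; it only picks the pre-compressed variant the client can accept and forwards the stored headers.</p>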
    <div>
      <h3>Scaling the website and API</h3>
      <a href="#scaling-the-website-and-api">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2q1lBMUDzWZUSCs9rMRTIw/a0b9baf89cd1056752a84566fe31dda8/image2.gif" />
            
            </figure><p>Besides the core cdnjs infrastructure, many of its other components received upgrades as well!</p><p>On the cdnjs project’s <a href="https://cdnjs.com/">homepage</a>, you will be greeted by a slick <a href="https://github.com/cdnjs/static-website">new beta website</a> built by <a href="https://github.com/mattipv4">Matt</a>. Constructed with <a href="https://vuejs.org/">Vue</a> and <a href="https://nuxtjs.org/">Nuxt</a>, the beta website is powered entirely by the <a href="https://cdnjs.com/api">cdnjs API</a>. As a result, it is always up-to-date with the latest package information and requires low resource usage to serve the site—which runs completely on the client-side after the first page load—helping us scale with cdnjs’s never-ending growth.</p><p>In fact, the cdnjs API also strengthened its scalability, benefitting from a serverless architecture close to the one we have seen with cdnjs and Workers KV.</p><p>Before migrating to Workers KV, the cdnjs API relied on a regularly scheduled process that involved generating about 300MB of metadata. The cdnjs API’s backend would then fetch this enormous “package.min.js” file into memory and use it to operate the API. Similarly, file SRIs were pushed to <a href="https://github.com/cdnjs/SRIs">cdnjs/SRIs</a>, which was cloned by the API locally to serve SRI responses.</p><p>After all cdnjs files (within the permitted size limit) were moved to Workers KV, these legacy processes became unsustainable, requiring millions of reads and an unreasonable amount of time. Therefore, we decided to upload all of this metadata into Workers KV as well. 
We split the metadata into four namespaces—one for package-level metadata, one for version-specific metadata, one containing aggregated metadata, and one for file SRIs.</p><p>Similar to cdnjs’s serverless design, a Cloudflare Worker sits on top of <a href="http://metadata.speedcdnjs.com/packages">metadata.speedcdnjs.com</a>, serving data from Workers KV using several public endpoints. Currently, the cdnjs API is fully integrated with these endpoints, which provide an elegant solution as cdnjs continues to scale.</p>
    <div>
      <h3>Transparency and the future of cdnjs</h3>
      <a href="#transparency-and-the-future-of-cdnjs">
        
      </a>
    </div>
    <p>Since its birth in January 2011, cdnjs has always been deeply rooted in transparency, deriving its strength from the community. Even when cdnjs exploded in size and its founders Ryan Kirkman and Thomas Davis <a href="/cdnjs-community-moderated-javascript-librarie/">teamed up with us in June 2011</a>, the project remained entirely open-source on <a href="https://github.com/cdnjs/">GitHub</a>.</p><p>As the years passed, it became harder for the founders to stay active, heavily depending on the community for support. With a nearly nonexistent budget and little access to the repository, core cdnjs maintainers were challenged every day to keep the project alive.</p><p>Last year, this led us to contact the founders, <a href="https://news.ycombinator.com/item?id=21416614">who were happy to have our assistance with the project</a>. With Cloudflare’s increased role, cdnjs is as stable as ever, with <a href="https://cdnjs.com/about">active members</a> from both Cloudflare and the community.</p><p>However, as we remove our reliance on the legacy system and store files in Workers KV, there are concerns that cdnjs will become proprietary. Don’t worry, we are working hard to ensure that cdnjs remains as transparent and open-source as possible. To help the community audit updates to Workers KV, there is a new repository, <a href="https://github.com/cdnjs/logs">cdnjs/logs</a>, which is used by the bot to log all Workers KV-related events. Furthermore, anyone can validate the integrity of cdnjs files by fetching SRIs from the cdnjs API.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Overall, this past year has been a turbulent time for cdnjs, but all of its shortcomings have acted as red flags to help us build a better system. Most recently, we have mitigated the risks of depending on physical machines at a single location, migrating cdnjs to a serverless infrastructure where its files are stored in <a href="https://developers.cloudflare.com/workers/reference/storage">Workers KV</a>.</p><p>Today, cdnjs is in good hands, and is not going away anytime soon. Shout out especially to the maintainers <a href="https://github.com/xtuc">Sven</a> and <a href="https://github.com/mattipv4">Matt</a> for creating tons of momentum with the project, working on everything from scaling cdnjs to editing this post.</p><p>Moving forward, we are committed to making cdnjs as transparent as possible. As we continue to improve cdnjs, we will release more blog posts to keep the community up to date. If you are interested, please subscribe to our blog. After all, it is the community that makes cdnjs possible! A special thanks to our active GitHub contributors and members of the <a href="https://github.com/cdnjs/cdnjs/discussions/">cdnjs Community Forum</a> for sticking with us!</p> ]]></content:encoded>
            <category><![CDATA[CDNJS]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Serverless]]></category>
            <guid isPermaLink="false">2YoV0y94C9jANAkQJ2V0TA</guid>
            <dc:creator>Tyler Caslin</dc:creator>
        </item>
        <item>
            <title><![CDATA[Catching up with Workers KV]]></title>
            <link>https://blog.cloudflare.com/catching-up-with-workers-kv/</link>
            <pubDate>Mon, 29 Jun 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we’d like to share with you some of the stuff that has recently shipped in Workers KV: a new feature and an internal change that should significantly improve latency in some cases. Let’s dig in! ]]></description>
            <content:encoded><![CDATA[ <p>The Workers Distributed Data team has been hard at work since <a href="/whats-new-with-workers-kv/">we gave you an update last November</a>. Today, we’d like to share with you some of the stuff that has recently shipped in Workers KV: a new feature and an internal change that should significantly improve latency in some cases. Let’s dig in!</p>
    <div>
      <h3>KV Metadata</h3>
      <a href="#kv-metadata">
        
      </a>
    </div>
    <p>Workers KV has a fairly straightforward interface: you can put keys and values into KV, and then fetch the value back out by key:</p>
            <pre><code>await contents.put("index.html", someHtmlContent);
await contents.put("index.css", someCssContent);
await contents.put("index.js", someJsContent);

// later

let index = await contents.get("index.html");</code></pre>
            <p>Pretty straightforward. But as you can see from this example, you may store different kinds of content in KV, even when the underlying type is identical. All of the values are strings, but one is HTML, one is CSS, and one is JavaScript. If we were going to serve this content to users, we would have to construct a response. And when we do, we have to let the client know what the content type of that response is: text/html for HTML, text/css for CSS, and text/javascript for JavaScript. If we serve the incorrect content type to our clients, they won’t display the pages correctly.</p><p>One possible solution to this problem is using the <a href="https://www.npmjs.com/package/mime">mime package from npm</a>. This lets us write code that looks like this:</p>
            <pre><code>// pathKey is a variable with a value like "index.html"
const mimeType = mime.getType(pathKey) || 'text/plain'</code></pre>
            <p>Nice and easy. But there are some drawbacks. First of all, we have to detect the content type at runtime, which means we’re figuring it out on every request. It would be nicer to figure it out only once instead. Second, if we look at how the package implements getType, it does this by <a href="https://github.com/broofa/mime/blob/d97bfaeabf8b5ff0124692244f921836ea405c41/types/standard.js">including an array of possible extensions and their types</a>. This means that this array is included in our worker, taking up 9kb of space. That’s also less than ideal.</p><p>But now, we have a better way. Workers KV will now allow you to add some extra JSON to each key/value pair, to use however you’d like. So we could start inserting the contents of those files like this, instead:</p>
            <pre><code>await contents.put("index.html", someHtmlContent, {"Content-Type": "text/html"});
await contents.put("index.css", someCssContent, {"Content-Type": "text/css"});
await contents.put("index.js", someJsContent, {"Content-Type": "text/javascript"});</code></pre>
            <p>You could determine these content types in various ways: by looking at the file extension like the mime package, or by using a library that inspects the file’s contents to figure out its type like libmagic. Regardless, the type would be stored in KV alongside the contents of the file. This way, there’s no need to recompute the type on every request. Additionally, the detection code would live in your uploading tool, not in your worker, creating a smaller bundle. Win-win!</p><p>The worker code would pass along this metadata by using a new method:</p>
            <pre><code>let {value, metadata} = await contents.getWithMetadata("index.js");</code></pre>
            <p>Here, <code>value</code> would have the contents, like before. But <code>metadata</code> contains the JSON of the metadata that was stored: <code>metadata["Content-Type"]</code> would return <code>"text/javascript"</code>. You’ll also see this metadata come back when you make a list request.</p><p>Given that you can store arbitrary JSON, it’s useful for more than just content types: we’ve had folks <a href="https://community.cloudflare.com/t/etag-and-content-type-support-in-kv-storage/150331?u=sklabnik">post to the forums asking about etags</a>, for example. We’re excited to see what you do with this new capability!</p>
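<p>Putting the pieces together, here is one way an upload tool might compute the type once and store it alongside the value (a sketch under our own assumptions, with a deliberately tiny extension map and a hypothetical <code>uploadAsset</code> helper):</p>
            <pre><code>// Tiny, illustrative extension-to-MIME map; a real tool could use the
// mime package or libmagic instead, but only at upload time.
const MIME_TYPES = {
  html: "text/html",
  css: "text/css",
  js: "text/javascript",
};

function contentTypeFor(pathKey) {
  const ext = pathKey.split(".").pop().toLowerCase();
  return MIME_TYPES[ext] || "text/plain";
}

// Hypothetical upload step, mirroring the put() calls above.
async function uploadAsset(contents, pathKey, body) {
  await contents.put(pathKey, body, { "Content-Type": contentTypeFor(pathKey) });
}</code></pre>
<p>The serving Worker then only reads the stored type back; no detection logic or MIME table ships in the Worker bundle at all.</p>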
    <div>
      <h3>Significantly faster writes</h3>
      <a href="#significantly-faster-writes">
        
      </a>
    </div>
    <p>Our documentation states:</p><p><i>Very infrequently read values are stored centrally, while more popular values are maintained in all of our data centers around the world.</i></p><p>This is why Workers KV is optimized for higher read volumes than writes. We distribute popular data across the globe, close to users wherever they are. However, for infrequently accessed data, we store the data in a central location until access is requested. Each write (and delete) must go back to the central data store, as do reads on less popular values. The central store was located in the United States, and so the speed for writes would be variable. In the US, it would be much faster than in, say, Europe or Asia.</p><p>Recently, we have rolled out a major internal change. We have added a second source of truth on the European continent. These two sources of truth will still coordinate between themselves, ensuring that any data you write or update will be available in both places as soon as possible. But latencies from Europe, as well as places closer to Europe than the United States, should be much faster, as they do not have to go all the way to the US.</p><p>How much faster? Well, it will depend on your workload. Several other Cloudflare products use Workers KV, and here’s a graph of response times from one of them:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4rWSdJkZyGgRnuRQepJzmE/bcb9ef29ac2bfb276f4592c6638dc143/image2-10.png" />
            
            </figure><p>As you can see, there’s a sharp drop in the graph when the switchover happened.</p><p>We can also measure this time across all customers:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6bZoPcQ3CbtsahcNKvTExa/d5751591954fe86fd1416b9853f04bff/image1-11.png" />
            
            </figure><p>The long tail has been significantly shortened. (We’ve redacted the exact numbers, but you can still see the magnitude of the changes.)</p>
    <div>
      <h3>More to come</h3>
      <a href="#more-to-come">
        
      </a>
    </div>
    <p>The distributed data team has been working on some additional things, but we’re not quite ready to share them with you yet! We hope that you’ll find these changes make Workers KV even better for you, and we’ll be sharing more updates on the blog as we ship.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <guid isPermaLink="false">7ee4ME9b9t45i48XMjKejB</guid>
            <dc:creator>Steve Klabnik</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Secrets and Environment  Variables to Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/workers-secrets-environment/</link>
            <pubDate>Wed, 26 Feb 2020 15:00:00 GMT</pubDate>
            <description><![CDATA[ The Workers team here at Cloudflare has been hard at work shipping a bunch of new features in the last year and we’ve seen some amazing things built with the tools we’ve provided. ]]></description>
            <content:encoded><![CDATA[ <p>The Workers team here at Cloudflare has been hard at work shipping a bunch of new features in the last year and we’ve seen some <a href="https://workers.cloudflare.com/built-with">amazing things</a> built with the tools we’ve provided. However, as my uncle once said, with great serverless platform growth comes great responsibility.</p><p>One of the ways we can help is by ensuring that deploying and maintaining your Workers scripts is a low-risk endeavor. Rotating a set of API keys shouldn’t require risking downtime through code edits and redeployments, and in some cases it may not make sense for the developer writing the script to know the actual API key value at all. To help tackle this problem, we’re releasing Secrets and Environment Variables to the Wrangler CLI and Workers Dashboard.</p>
    <div>
      <h2>Supporting secrets</h2>
      <a href="#supporting-secrets">
        
      </a>
    </div>
    <p>As we started to design support for secrets in Workers, we had a sense that this was already a big concern for a lot of our users, but we wanted to learn about all of the use cases to ensure we were building the right thing. We headed to the community forums, Twitter, and the inbox of Louis Grace, business development representative extraordinaire, for some anecdotes about Secrets usage. We also sent out a survey to our existing users to learn about use cases and pain points.</p><p>We learned that even though there was already a way to store secrets without exposing them via Workers KV, the solution was not very intuitive, nor did it meet all the needs of our users. Many users didn’t even know we had an interim solution in place. Recognizing that we were not the first platform to encounter this problem, we surveyed the existing landscape of Platform as a Service offerings to get a better sense for what our users would expect of us.</p>
    <div>
      <h2>Deciding on a solution</h2>
      <a href="#deciding-on-a-solution">
        
      </a>
    </div>
    <p>One of the first things we found was that not all environment variables are created equal. While the simplest use case for having a defined environment variable may be storing a piece of text that can be updated no matter where it is referenced in a script, sometimes those variables may have higher stakes associated with them. If you’re storing an API key that controls access to an important system, you may not want to allow anyone with dashboard access to see it, maybe not even the developers themselves.</p><p>With this in mind, we had to ensure the feature covered two different use cases: one for storing variables in plain text where you could see the variable being referenced and make edits to it and another where the variable would be encrypted as soon as you save it, never to be seen again. This way, we were able to serve both needs of our users, side by side, without one compromising for the other.</p>
    <div>
      <h2>Testing our prototypes</h2>
      <a href="#testing-our-prototypes">
        
      </a>
    </div>
    <p>Once we had a fairly good idea of what we wanted to build, we built some prototypes and rough implementations in staging environments so we would be able to perform some usability testing. We wrangled up some developers and observed them as they performed a series of tasks where they were asked to add some secrets and plain-text environment variables, reference them in one of their Workers, and bind their Worker to a Worker KV namespace.</p><p>Along the way we also asked questions to understand the developer’s professional background, familiarity with the product, and the use cases they’ve had for using Workers in the past along with any pain points they experienced.</p><p>While we were testing the new dashboard interface we also began testing the usability of the Wrangler CLI. We had Wrangler users perform the same tasks as the Workers dashboard users to help us find out if users are expecting different things out of their command-line tooling.</p>
    <div>
      <h2>Findings and fixes</h2>
      <a href="#findings-and-fixes">
        
      </a>
    </div>
    <p>Through our testing we were able to make a number of changes before the final release. Some of the smaller changes included things like adjusting the behavior of form fields to ensure users knew which variable would be associated with each value. We also made larger changes like electing to separate the KV namespace bindings from the other environment variables as a way to emphasize that KV namespace bindings are not the keys and values themselves but a reference to a namespace where those keys are stored.</p><p>Cina, one of our engineers, put together a proposal to align some of our terminology with the terms that our developers were naturally using to describe their workflow. In Wrangler users were accustomed to referencing their KV namespaces by adding a KV namespace binding so when they came to the Workers dashboard interface and saw a field called “KV Variables” they were often confused, thinking they were adding keys and values to the namespace itself instead of establishing a variable that could be used to reference the namespace. As a fix, we decided to call it a “KV namespace binding” throughout the experience.</p>
    <div>
      <h2>Try it out</h2>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>Environment variables are available now with the <a href="https://developers.cloudflare.com/workers/quickstart/">Wrangler CLI</a> and in the <a href="https://dash.cloudflare.com/">Workers Dashboard</a> so go ahead and give them a shot today!</p>
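<p>For a flavor of what this enables, here is a hypothetical Worker that uses a secret (the binding name <code>API_KEY</code> and the upstream URL are our inventions for the example):</p>
            <pre><code>// API_KEY is a secret bound to this script; the platform injects it as a
// global, so it never appears in your source code or repository.
async function handleRequest(request) {
  const upstream = await fetch("https://api.example.com/v1/data", {
    headers: { "Authorization": "Bearer " + API_KEY },
  });
  return upstream;
}
// Registered with: addEventListener("fetch", (e) => e.respondWith(handleRequest(e.request)));</code></pre>
<p>Rotating the key then means updating the secret in Wrangler or the dashboard, with no code change or redeployment.</p>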
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/j8zJOFaoQFe4nIbzaSFmG/473957f2203af94b4d736697e9587db1/wrangler-add-secret-1.svg" />
            
            </figure><p>Adding a secret with Wrangler</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4552by5LG5KpgFbMJPaLG8/f9b74b091a3050500cc42868c3002dea/worker-detail-page.jpg" />
            
            </figure><p>Managing environment variables and KV bindings in the Workers Dashboard</p><p>As we continue to build out the Workers platform we’d love to hear from you. Let us know if you’re interested in <a href="https://docs.google.com/forms/d/e/1FAIpQLSd76N9gZFo_hxhHPHtTtwSkMAC7rfYD1TU6CPmLe2iKKlKWLA/viewform">participating in user research</a> or just <a href="#">have something to say</a> as we’d love to hear from you.</p> ]]></content:encoded>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Design]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">38BlNuhadpp4EF1IJj5Xc7</guid>
            <dc:creator>John Donmoyer</dc:creator>
            <dc:creator>Nena</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Serverlist: Full Stack Serverless, Serverless Architecture Reference Guides, and more]]></title>
            <link>https://blog.cloudflare.com/serverlist-10th-edition/</link>
            <pubDate>Mon, 02 Dec 2019 20:29:05 GMT</pubDate>
            <description><![CDATA[ Check out our tenth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend. ]]></description>
            <content:encoded><![CDATA[ <p>Check out our tenth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend.</p><p>Sign up below to have The Serverlist sent directly to your mailbox.</p>

 ]]></content:encoded>
            <category><![CDATA[The Serverlist Newsletter]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">4X1in1Jol4OSq3WBZs7cK7</guid>
            <dc:creator>Connor Peshek</dc:creator>
        </item>
        <item>
            <title><![CDATA[What’s new with Workers KV?]]></title>
            <link>https://blog.cloudflare.com/whats-new-with-workers-kv/</link>
            <pubDate>Wed, 06 Nov 2019 14:00:00 GMT</pubDate>
            <description><![CDATA[ The Storage team has shipped some new features for Workers KV that folks have been asking for. In this post, we'll talk about some of these new features and how to use them. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2tL9Ez4EXvfYdtkiwRnZDv/d0f51bcde409896742bdf48cfa55d02c/workers-KV-dark-back_2x.png" />
            
            </figure><p>The Storage team here at Cloudflare shipped Workers KV, our global, low-latency, key-value store, <a href="/workers-kv-is-ga/">earlier this year</a>. As people have started using it, we’ve gotten some feature requests, and have shipped some new features in response! In this post, we’ll talk about some of these use cases and how these new features enable them.</p>
    <div>
      <h2>New KV APIs</h2>
      <a href="#new-kv-apis">
        
      </a>
    </div>
    <p>We’ve shipped some new APIs, both via <code>api.cloudflare.com</code>, as well as inside of a Worker. The first one provides the ability to upload and delete more than one key/value pair at once. Given that Workers KV is great for read-heavy, write-light workloads, a common pattern when getting started with KV is to write a bunch of data via the API, and then read that data from within a Worker. You can now do these bulk uploads without needing a separate API call for every key/value pair. This feature is available via <code>api.cloudflare.com</code>, but is not yet available from within a Worker.</p><p>For example, say we’re using KV to redirect legacy URLs to their new homes. We have a list of URLs to redirect, and where they should redirect to. We can turn this list into JSON that looks like this:</p>
            <pre><code>[
  {
    "key": "/old/post/1",
    "value": "/new-post-slug-1"
  },
  {
    "key": "/old/post/2",
    "value": "/new-post-slug-2"
  }
]</code></pre>
            <p>And then POST this JSON to the new bulk endpoint, <code>/storage/kv/namespaces/:namespace_id/bulk</code>. This will add both key/value pairs to our namespace.</p><p>Likewise, if we wanted to drop support for these redirects, we could issue a DELETE that has this body:</p>
            <pre><code>[
    "/old/post/1",
    "/old/post/2"
]</code></pre>
            <p>to <code>/storage/kv/namespaces/:namespace_id/bulk</code>, and we’d delete both key/value pairs in a single call to the API.</p><p>The bulk upload API has one more trick up its sleeve: not all data is a string. For example, you may have an image as a value, which is just a bag of bytes. If you need to write some binary data, you’ll have to base64-encode the value’s contents so that it’s valid JSON. You’ll also need to set one more key:</p>
            <pre><code>[
  {
    "key": "profile-picture",
    "value": "aGVsbG8gd29ybGQ=",
    "base64": true
  }
]</code></pre>
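If you’re scripting an upload, a payload like this can be built programmatically. Here’s a small Node.js sketch (the <code>toBulkPairs</code> helper is our own name for illustration, not part of any Cloudflare library):

```javascript
// Sketch: build a bulk-upload payload that mixes text and binary values.
// String values pass through as-is; binary values (Buffer/Uint8Array) are
// base64-encoded with the "base64" flag set so KV decodes them back to bytes.
function toBulkPairs(entries) {
  return entries.map(({ key, value }) => {
    if (typeof value === "string") {
      return { key, value };
    }
    return { key, value: Buffer.from(value).toString("base64"), base64: true };
  });
}

const body = JSON.stringify(
  toBulkPairs([
    { key: "/old/post/1", value: "/new-post-slug-1" },
    { key: "profile-picture", value: Buffer.from("hello world") },
  ])
);
console.log(body);
```

The resulting JSON is exactly what the bulk endpoint expects in its request body.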
            <p>Workers KV will decode the value from base64, and then store the resulting bytes.</p><p>Beyond bulk upload and delete, we’ve also given you the ability to list all of the keys you’ve stored in any of your namespaces, from both the API and within a Worker. For example, if you wrote a blog powered by Workers + Workers KV, you might have each blog post stored as a key/value pair in a namespace called “contents”. Most blogs have some sort of “index” page that lists all of the posts that you can read. To create this page, we need to get a listing of all of the keys, since each key corresponds to a given post. We could do this from within a Worker by calling <code>list()</code> on our namespace binding:</p>
            <pre><code>const value = await contents.list()</code></pre>
            <p>But what we get back isn’t only a list of keys. The object looks like this:</p>
            <pre><code>{
  keys: [
    { name: "Title 1" },
    { name: "Title 2" }
  ],
  list_complete: false,
  cursor: "6Ck1la0VxJ0djhidm1MdX2FyD"
}</code></pre>
            <p>We’ll talk about this “cursor” stuff in a second, but if we wanted to get the list of titles, we’d have to iterate over the keys property, and pull out the names:</p>
            <pre><code>const keyNames = value.keys.map(e =&gt; e.name)</code></pre>
            <p><code>keyNames</code> would be an array of strings:</p>
            <pre><code>["Title 1", "Title 2", "Title 3", "Title 4", "Title 5"]</code></pre>
            <p>We could take <code>keyNames</code> and those titles to build our page.</p><p>So what’s up with the <code>list_complete</code> and <code>cursor</code> properties? Well, imagine that we’ve been a <i>very</i> prolific blogger, and we’ve now written thousands of posts. The list API is paginated, meaning that it will only return the first thousand keys. To see if there are more pages available, you can check the <code>list_complete</code> property. If it is false, you can use the cursor to fetch another page of results. The value of <code>cursor</code> is an opaque token that you pass to another call to list:</p>
            <pre><code>const value = await NAMESPACE.list()
const cursor = value.cursor
const next_value = await NAMESPACE.list({"cursor": cursor})</code></pre>
            <p>This will give us another page of results, and we can repeat this process until <code>list_complete</code> is true.</p><p>Listing keys has one more trick up its sleeve: you can also return only keys that have a certain prefix. Imagine we want to have a list of posts, but only the posts that were made in October of 2019. While Workers KV is only a key/value store, we can use the prefix functionality to do interesting things by filtering the list. In our original implementation, we had stored the titles of keys only:</p><ul><li><p><code>Title 1</code></p></li><li><p><code>Title 2</code></p></li></ul><p>We could change this to include the date in YYYY-MM-DD format, with a colon separating the two:</p><ul><li><p><code>2019-09-01:Title 1</code></p></li><li><p><code>2019-10-15:Title 2</code></p></li></ul><p>We can now ask for a list of all posts made in 2019:</p>
            <pre><code>const value = await NAMESPACE.list({"prefix": "2019"})</code></pre>
            <p>Or a list of all posts made in October of 2019:</p>
            <pre><code>const value = await NAMESPACE.list({"prefix": "2019-10"})</code></pre>
            <p>These calls will only return keys with the given prefix, which in our case, corresponds to a date. This technique can let you group keys together in interesting ways. We’re looking forward to seeing what you all do with this new functionality!</p>
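Putting the cursor pagination together, a loop like the following drains every page. This is a sketch: <code>listAllKeys</code> is our own helper name, and <code>NAMESPACE</code> below is a small mock standing in for a real KV binding so the code runs anywhere; with a real binding the loop is identical.

```javascript
// Drain a paginated list(), collecting key names until list_complete is true.
// options can carry a prefix, e.g. { prefix: "2019-10" }.
async function listAllKeys(namespace, options = {}) {
  const names = [];
  let cursor;
  do {
    const page = await namespace.list({ ...options, cursor });
    names.push(...page.keys.map((k) => k.name));
    cursor = page.list_complete ? undefined : page.cursor;
  } while (cursor);
  return names;
}

// Two-page mock: the first call returns a cursor, the second completes the list.
const NAMESPACE = {
  async list({ cursor } = {}) {
    return cursor
      ? { keys: [{ name: "2019-10-15:Title 2" }], list_complete: true }
      : { keys: [{ name: "2019-09-01:Title 1" }], list_complete: false, cursor: "abc" };
  },
};

listAllKeys(NAMESPACE).then((names) => console.log(names.join(", ")));
```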
    <div>
      <h2>Relaxing limits</h2>
      <a href="#relaxing-limits">
        
      </a>
    </div>
    <p>For various reasons, there are a few hard limits with what you can do with Workers KV. We’ve decided to raise some of these limits, which expands what you can do.</p><p>The first is the limit of the number of namespaces any account could have. This was previously set at 20, but some of you have made a lot of namespaces! We’ve decided to relax this limit to 100 instead. This means you can create five times the number of namespaces you previously could.</p><p>Additionally, we had a two megabyte maximum size for values. We’ve increased the limit for values to ten megabytes. With the release of Workers Sites, folks are keeping things like images inside of Workers KV, and two megabytes felt a bit cramped. While Workers KV is not a great fit for truly large values, ten megabytes gives you the ability to store larger images easily. As an example, a 4k monitor has a native resolution of 4096 x 2160 pixels. If we had an image at this resolution as a lossless PNG, for example, it would be just over five megabytes in size.</p>
    <div>
      <h2>KV browser</h2>
      <a href="#kv-browser">
        
      </a>
    </div>
    <p>Finally, you may have noticed that there’s now a KV browser in the dashboard! Needing to type out a cURL command just to see what’s in your namespace was a real pain, and so we’ve given you the ability to check out the contents of your namespaces right on the web. When you look at a namespace, you’ll also see a table of keys and values:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7dGqVgzdqSPhXX8dEygRH7/7f38220fa78c724c91508283f934c8de/image-1.png" />
            
            </figure><p>The browser has grown with a bunch of useful features since it initially shipped. You can not only see your keys and values, but also add new ones:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5tonxPblY56QeSIygD7Wzl/13a9037caf6c4f29ba4dd2a670212f96/image-2.png" />
            
            </figure><p>edit existing ones:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5wRvxC9VHpAp1YE5O8GRdM/68a77b1b50788ff8cdcc7f9039b46055/image-3.png" />
            
            </figure><p>...and even upload files!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/36H4hNRLwoEZLL0DoHbU90/b0db8cfe05f079e325cb2b1fb173c57e/image-4.png" />
            
            </figure><p>You can also download them:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/G1BgZQxeG8LoG9zuvANok/2c4306fb1ae2afae80e632588cb829ae/image-5.png" />
            
            </figure><p>As we ship new features in Workers KV, we’ll be expanding the browser to include them too.</p>
    <div>
      <h2>Wrangler integration</h2>
      <a href="#wrangler-integration">
        
      </a>
    </div>
    <p>The Workers Developer Experience team has also been shipping some features related to Workers KV. Specifically, you can fully interact with your namespaces and the key/value pairs inside of them.</p><p>For example, my personal website is running on Workers Sites. I have a Wrangler project named “website” to manage it. If I wanted to add another namespace, I could do this:</p>
            <pre><code>$ wrangler kv:namespace create new_namespace
Creating namespace with title "website-new_namespace"
Success: WorkersKvNamespace {
    id: "&lt;id&gt;",
    title: "website-new_namespace",
}

Add the following to your wrangler.toml:

kv-namespaces = [
    { binding = "new_namespace", id = "&lt;id&gt;" }
]</code></pre>
            <p>I’ve redacted the namespace IDs here, but Wrangler let me know that the creation was successful, and provided me with the configuration I need to put in my <code>wrangler.toml</code>. Once I’ve done that, I can add new key/value pairs:</p>
            <pre><code>$ wrangler kv:key put "hello" "world" --binding new_namespace
Success</code></pre>
            <p>And read it back out again:</p>
            <pre><code>$ wrangler kv:key get "hello" --binding new_namespace
world</code></pre>
            <p>If you’d like to learn more about the design of these features, <a href="/how-we-design-features-for-wrangler/">“How we design features for Wrangler, the Cloudflare Workers CLI”</a> discusses them in depth.</p>
    <div>
      <h2>More to come</h2>
      <a href="#more-to-come">
        
      </a>
    </div>
    <p>The Storage team is working hard at improving Workers KV, and we’ll keep shipping new stuff every so often. Our updates will be more regular in the future. If there’s something you’d particularly like to see, please reach out!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">1p7MWSJNs1jvrCZgGC1xma</guid>
            <dc:creator>Steve Klabnik</dc:creator>
        </item>
        <item>
            <title><![CDATA[Workers Sites: Deploy Your Website Directly on our Network]]></title>
            <link>https://blog.cloudflare.com/workers-sites/</link>
            <pubDate>Fri, 27 Sep 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Performance on the web has always been a battle against the speed of light — accessing a site from London that is served from Seattle, WA means every single asset request has to travel over seven thousand miles. ]]></description>
            <content:encoded><![CDATA[ <p><b><i>Update</i></b><i>: since publishing this blog post, we've released </i><a href="/big-ideas-on-pages/"><i>Cloudflare Pages</i></a><i>. If you're using Cloudflare for </i><a href="https://www.cloudflare.com/developer-platform/solutions/hosting/"><i>hosting sites</i></a><i>, Cloudflare Pages is better suited for this use case.</i></p><p>Performance on the web has always been a battle against the speed of light — accessing a site from London that is served from Seattle, WA means every single asset request has to travel over seven thousand miles. The first breakthrough in the web performance battle was HTTP/1.1 connection keep-alive and browsers opening multiple connections. The next breakthrough was the CDN, bringing your static assets closer to your end users by <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">caching them in data centers</a> closer to them. Today, with Workers Sites, we’re excited to announce the next big breakthrough — entire sites distributed directly onto the edge of the Internet.</p>
    <div>
      <h2>Deploying to the edge of the network</h2>
      <a href="#deploying-to-the-edge-of-the-network">
        
      </a>
    </div>
    <p>Why isn’t just caching assets sufficient? Yes, caching improves performance, but significant performance improvement comes with a series of headaches. The CDN can make a guess at which assets it should cache, but that is just a guess. Configuring your site for maximum performance has always been an error-prone process, requiring a wide collection of esoteric rules and headers. Even when perfectly configured, almost nothing is cached forever, precious requests still often need to travel all the way to your origin (wherever it may be). Cache invalidation is, after all, one of the <a href="https://twitter.com/secretgeek/status/7269997868">hardest problems in computer science</a>.</p><p>This begs the question: rather than moving bytes from the origin to the edge bit by bit clumsily, why not push the whole origin to the edge?</p>
    <div>
      <h2>Workers Sites: Extending the Workers platform</h2>
      <a href="#workers-sites-extending-the-workers-platform">
        
      </a>
    </div>
    <p>Two years ago for Birthday Week, we announced <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>, a way for developers to write and run JavaScript and WebAssembly on our network in 194 cities around the world. A year later, we released Workers KV, our distributed key-value store that gave developers the ability to store state at the edge in those same cities.</p><p>Workers Sites leverages the power of Workers and Workers KV by allowing developers to upload their sites directly to the edge, closer to their end users. Born on the edge, Workers Sites is what we think modern development on the web should look like: natively secure, fast, and massively scalable. Less of your time is spent on configuration, and more on your code and content.</p>
    <div>
      <h2>How it works</h2>
      <a href="#how-it-works">
        
      </a>
    </div>
    <p>Workers Sites are deployed with a few terminal commands, and can serve a site generated by any static site generator, such as Hugo, Gatsby, or Jekyll. Using <a href="https://github.com/cloudflare/wrangler">Wrangler</a> (our CLI), you can upload your site’s assets directly into KV. When a request hits your Workers Site, the Cloudflare Worker generated by Wrangler will read and serve the asset from KV with the appropriate headers (no need to worry about Content-Type and Cache-Control; we’ve got you covered).</p><p>Workers Sites can be used to deploy any static site, such as a blog, a marketing site, or a portfolio. If you ever decide your site needs to become a little less static, your Worker is just code: edit and extend it until you have a dynamic site running all around the world.</p>
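The idea can be sketched in a few lines. This is not the exact Worker Wrangler generates (the real one handles far more edge cases), and <code>ASSETS</code> is a hypothetical KV binding name used here for illustration:

```javascript
// Simplified sketch of an asset-serving Worker: look the request path up in
// KV and serve the bytes with a sensible Content-Type header.
const TYPES = {
  ".html": "text/html",
  ".css": "text/css",
  ".js": "application/javascript",
  ".png": "image/png",
};

// Map a path to a Content-Type, falling back to a generic binary type.
function contentType(path) {
  const ext = path.slice(path.lastIndexOf("."));
  return TYPES[ext] || "application/octet-stream";
}

async function serveAsset(request) {
  let path = new URL(request.url).pathname;
  if (path.endsWith("/")) path += "index.html"; // serve directory indexes
  const body = await ASSETS.get(path, "arrayBuffer");
  if (body === null) return new Response("Not found", { status: 404 });
  return new Response(body, {
    headers: { "Content-Type": contentType(path) },
  });
}
```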
    <div>
      <h2>Getting started</h2>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>To get started with Workers Sites, you first need to <a href="https://dash.cloudflare.com/sign-up">sign up for Workers</a>. After selecting your workers.dev subdomain, choose the Workers Unlimited plan (starting at $5 / month) to get access to Workers KV and the ability to deploy Workers Sites.</p><p>After signing up for Workers Unlimited you’ll need to install the CLI for Workers, Wrangler. Wrangler can be installed either from NPM or Cargo:</p>
            <pre><code># NPM Installation
npm i @cloudflare/wrangler -g
# Cargo Installation
cargo install wrangler</code></pre>
            <p>Once you install Wrangler, you are ready to deploy your static site, with the following steps:</p><ol><li><p>Run <code>wrangler init --site</code> in the directory that contains your static site's built assets</p></li><li><p>Fill in the newly created <code>wrangler.toml</code> file with your account and project details</p></li><li><p>Publish your site with <code>wrangler publish</code></p></li></ol><p>You can also check out our Workers Sites <a href="https://developers.cloudflare.com/workers/sites">reference documentation</a> or follow the full tutorial for <a href="https://developers.cloudflare.com/workers/tutorials/deploy-a-react-app">create-react-app</a> in the docs.</p><p>If you’d prefer to get started by watching a video, we’ve got you covered! <a href="https://watch.cloudflarestream.com/9943b400b59802b77f83a8a57f39d682">This video</a> will walk you through creating and deploying your first Workers Site:</p>

    <div>
      <h2>Blazing fast: from Atlanta to Zagreb</h2>
      <a href="#blazing-fast-from-atlanta-to-zagreb">
        
      </a>
    </div>
    <p>In addition to improving the developer experience, we did a lot of work behind the scenes making sure that both deploys and the sites themselves are blazing fast — we’re excited to share the how with you in our <a href="/extending-the-workers-platform">technical blog post</a>.</p><p>To test the performance of Workers Sites, we took one of our personal sites and deployed it to run some benchmarks. This test was for our site, but your results may vary.</p><p>One common way to benchmark the performance of your site is with <a href="https://developers.google.com/web/tools/lighthouse">Google Lighthouse</a>, which you can run directly from the Audits tab of your Chrome browser.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1UOVn62zLc7zc549glFiTO/ceeab2a5a7bd731aa28d6020855beee1/image1-7.png" />
            
            </figure><p>So we passed the first test with flying colors — 100! However, running a benchmark from your own computer introduces a bias: your users are not necessarily where you are. In fact, your users are increasingly <i>not</i> where you are.</p><p>Where you’re benchmarking from is really important: running tests from different locations will yield different results. Benchmarking from Seattle and hitting a server on the West coast says very little about your global performance.</p><p>We decided to use a tool called Catchpoint to run benchmarks from cities around the world. To see how we compare, we deployed the site to three different static site deployment platforms including Workers Sites.</p><p>Since providers offer data center regions on the coasts of the United States, or Central Europe, it’s common to see good performance in regions such as North America, and we’ve got you covered here:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7C2l8XZO7Zb541CMMGGlmp/f9c75d30557d0b5f6b3e1c78a0a48994/Screen-Shot-2019-09-26-at-10.58.41-PM.png" />
            
            </figure><p>But what about your users in the rest of the world? Performance is even more critical in those regions: many of those users are not going to be connecting to your site on a MacBook Pro over a blazing fast connection. Workers Sites allows you to reach those regions without any additional effort on your part — every time <a href="/scaling-the-cloudflare-global/">our map grows</a>, your global presence grows with it.</p><p>We’ve done the work of running some benchmarks from different parts of the world for you, and we’re pleased to share the results:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6MWlNMQWy0ithl7CEi7C9o/2e5ce64bf0d018b720968f17e07c46dd/Screen-Shot-2019-09-26-at-10.58.24-PM.png" />
            
            </figure>
    <div>
      <h2>One last thing...</h2>
      <a href="#one-last-thing">
        
      </a>
    </div>
    <p>Deploying your next site with Workers Sites is easy and leads to great performance, so we thought it was only right that we deploy with Workers Sites ourselves. With this announcement, we are also open sourcing the <a href="https://developers.cloudflare.com/workers">Cloudflare Workers docs</a>! And, they are now served from a Cloudflare data center near you using Workers Sites.</p><p>We can’t wait to see what you deploy with <a href="https://workers.cloudflare.com/sites">Workers Sites</a>!</p><p><i>Have you built something interesting with Workers or Workers Sites? Let us know </i><a href="https://twitter.com/CloudflareDev"><i>@CloudflareDev</i></a><i>!</i></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3siPXgl85litpEz6zHNaDg</guid>
            <dc:creator>Rita Kozlov</dc:creator>
        </item>
        <item>
            <title><![CDATA[How We Design Features for Wrangler, the Cloudflare Workers CLI]]></title>
            <link>https://blog.cloudflare.com/how-we-design-features-for-wrangler/</link>
            <pubDate>Tue, 17 Sep 2019 15:55:00 GMT</pubDate>
            <description><![CDATA[ The most recent update to Wrangler, version 1.3.1, introduces important new features for developers building Cloudflare Workers — from built-in deployment environments to first class support for Workers KV. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>The most recent update to Wrangler, <a href="https://github.com/cloudflare/wrangler/releases/tag/v1.3.1">version 1.3.1</a>, introduces important new features for developers building <a href="https://workers.cloudflare.com">Cloudflare Workers</a> — from built-in deployment environments to first class support for <a href="https://developers.cloudflare.com/workers/reference/storage/overview/">Workers KV</a>. Wrangler is Cloudflare’s first officially supported CLI. Branching into this field of software has been a novel experience for us engineers and product folks on the Cloudflare Workers team.</p><p>As part of the 1.3.1 release, the folks on the Workers Developer Experience team dove into the thought process that goes into building out features for a CLI and thinking like users. Because while we wish building a CLI were as easy as our teammate Avery tweeted...</p><blockquote><p>If I were programming a CLI, I would simply design it in a way that is not controversial and works for every type of user.</p><p>— avery harnish (@SmoothAsSkippy) <a href="https://twitter.com/SmoothAsSkippy/status/1166842441711980546?ref_src=twsrc%5Etfw">August 28, 2019</a></p></blockquote><p>… it brings design challenges that many of us have never encountered. To overcome these challenges successfully requires deep empathy for users across the entire team, as well as the ability to address ambiguous questions related to how developers write Workers.</p>
    <div>
      <h2>Wrangler, meet Workers KV</h2>
      <a href="#wrangler-meet-workers-kv">
        
      </a>
    </div>
    <p>Our new KV functionality introduced a host of new features, from creating KV namespaces to bulk uploading key-value pairs for use within a Worker. This new functionality primarily consisted of logic for interacting with the Workers KV API, meaning that the technical work under “the hood” was relatively straightforward. Figuring out how to cleanly represent these new features to Wrangler users, however, became the fundamental question of this release.</p><p>Designing the invocations for new KV functionality unsurprisingly required multiple iterations, and taught us a lot about usability along the way!</p>
    <div>
      <h2>Attempt 1</h2>
      <a href="#attempt-1">
        
      </a>
    </div>
    <p>For our initial pass, the path originally seemed so obvious. (Narrator: It really, really wasn’t). We hypothesized that having Wrangler support familiar commands — like <code>ls</code> and <code>rm</code> — would be a reasonable mapping of well-known command line tools to Workers KV, and ended up with the following set of invocations:</p>
            <pre><code># creates a new KV Namespace
$ wrangler kv add myNamespace

# sets a string key that doesn't expire
$ wrangler kv set myKey="someStringValue"

# sets many keys
$ wrangler kv set myKey="someStringValue" myKey2="someStringValue2" ...

# sets a volatile (expiring) key that expires in 60 s
$ wrangler kv set myVolatileKey=path/to/value --ttl 60s

# deletes three keys
$ wrangler kv rm myNamespace myKey1 myKey2 myKey3

# lists all your namespaces
$ wrangler kv ls

# lists all the keys for a namespace
$ wrangler kv ls myNamespace

# removes all keys from a namespace, then removes the namespace		
$ wrangler kv rm -r myNamespace</code></pre>
            <p>While these commands invoked familiar shell utilities, they made interacting with your KV namespace a lot more like interacting with a filesystem than a key value store. The juxtaposition of a well-known command like <code>ls</code> with a non-command, <code>set</code>, was confusing. Additionally, mapping preexisting command line tools to KV actions was not a good 1-1 mapping (especially for <code>rm -r</code>; there is no need to recursively delete a KV namespace like a directory if you can just delete the namespace!)</p><p>This draft also surfaced use cases we needed to support: namely, we needed support for actions like easy bulk uploads from a file. This draft required users to enter every KV pair in the command line instead of reading from a file of key-value pairs; this was also a non-starter.</p><p>Finally, these KV subcommands caused confusion about actions to different resources. For example, the command for listing your Workers KV namespaces looked a lot like the command for listing keys within a namespace.</p><p>Going forward, we needed to meet these newly identified needs.</p>
    <div>
      <h2>Attempt 2</h2>
      <a href="#attempt-2">
        
      </a>
    </div>
    <p>Our next attempt shed the shell utilities in favor of simple, declarative subcommands like <code>create</code>, <code>list</code>, and <code>delete</code>. It also addressed the need for easy-to-use bulk uploads by allowing users to pass a JSON file of keys and values to Wrangler.</p>
            <pre><code># create a namespace
$ wrangler kv create namespace &lt;title&gt;

# delete a namespace
$ wrangler kv delete namespace &lt;namespace-id&gt;

# list namespaces
$ wrangler kv list namespace

# write key-value pairs to a namespace, with an optional expiration flag
$ wrangler kv write key &lt;namespace-id&gt; &lt;key&gt; &lt;value&gt; --ttl 60s

# delete a key from a namespace
$ wrangler kv delete key &lt;namespace-id&gt; &lt;key&gt;

# list all keys in a namespace
$ wrangler kv list key &lt;namespace-id&gt;

# write bulk kv pairs from a JSON file or a directory; for a directory, keys are the file paths from the root and values are the file contents
$ wrangler kv write bulk ./path/to/assets

# delete bulk pairs; same input functionality as above
$ wrangler kv delete bulk ./path/to/assets</code></pre>
            <p>Given the breadth of new functionality we planned to introduce, we also built out a taxonomy of new subcommands to ensure that invocations for different resources — namespaces, keys, and bulk sets of key-value pairs — were consistent:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UyT0JmQEJSq3ejZTtQkaO/e12c5b3fd7b93b48004067ef096a30cf/Attempt2Diagram.png" />
            
            </figure><p>Designing invocations with taxonomies became a crucial part of our development process going forward, and gave us a clear look at the “big picture” of our new KV features.</p><p>This approach was closer to what we wanted. It offered bulk put and bulk delete operations that would read multiple key-value pairs from a JSON file. After specifying an action subcommand (e.g. <code>delete</code>), users now explicitly stated which resource an action applied to (<code>namespace</code> , <code>key</code>, <code>bulk</code>) and reduced confusion about which action applied to which KV component.</p><p>This draft, however, was still not as explicit as we wanted it to be. The distinction between operations on <code>namespaces</code> versus <code>keys</code> was not as obvious as we wanted, and we still feared the possibility of different <code>delete</code> operations accidentally producing unwanted deletes (a possibly disastrous outcome!)</p>
    <div>
      <h2>Attempt 3</h2>
      <a href="#attempt-3">
        
      </a>
    </div>
    <p>We really wanted to help differentiate where in the hierarchy of resources a user was operating at any given time. Were they operating on <code>namespaces</code>, <code>keys</code>, or <code>bulk</code> sets of keys in a given operation, and how could we make that as clear as possible? We looked around, comparing how CLIs from kubectl to Heroku’s handle commands that affect different objects. We landed on a pleasing pattern inspired by Heroku’s CLI: colon-delimited command namespacing:</p>
            <pre><code>plugins:install PLUGIN    # installs a plugin into the CLI
plugins:link [PATH]       # links a local plugin to the CLI for development
plugins:uninstall PLUGIN  # uninstalls or unlinks a plugin
plugins:update            # updates installed plugins</code></pre>
            <p>So we adopted <code>kv:namespace</code>, <code>kv:key</code>, and <code>kv:bulk</code> to semantically separate our commands:</p>
            <pre><code># namespace commands operate on namespaces
$ wrangler kv:namespace create &lt;title&gt; [--env]
$ wrangler kv:namespace delete &lt;binding&gt; [--env]
$ wrangler kv:namespace rename &lt;binding&gt; &lt;new-title&gt; [--env]
$ wrangler kv:namespace list [--env]
# key commands operate on individual keys
$ wrangler kv:key write &lt;binding&gt; &lt;key&gt;=&lt;value&gt; [--env | --ttl | --exp]
$ wrangler kv:key delete &lt;binding&gt; &lt;key&gt; [--env]
$ wrangler kv:key list &lt;binding&gt; [--env]
# bulk commands take a user-generated JSON file as an argument
$ wrangler kv:bulk write &lt;binding&gt; ./path/to/data.json [--env]
$ wrangler kv:bulk delete &lt;binding&gt; ./path/to/data.json [--env]</code></pre>
            <p>And ultimately ended up with this topology:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1lL9YrgjMs0VX3vXm31lcP/fd71a9cee1e23752913113bed2960d5b/Attempt3Diagram.png" />
            
            </figure><p>We were even closer to our desired usage pattern; the object acted upon was explicit to users, and the action applied to the object was also clear.</p><p>There was one usage issue left. Supplying a <code>namespace-id</code> (the field that specifies which Workers KV namespace an action applies to) required users to dig up their clunky KV namespace ID (a string like <code>06779da6940b431db6e566b4846d64db</code>) and provide it on the command line under the <code>namespace-id</code> option. This value is what our Workers KV API expects in requests, but it is cumbersome for users to find and provide, let alone use frequently.</p><p>The solution we came to takes advantage of the <code>wrangler.toml</code> present in every Wrangler-generated Worker. To publish a Worker that uses a Workers KV store, the following field is needed in the Worker’s <code>wrangler.toml</code>:</p>
            <pre><code>kv-namespaces = [
	{ binding = "TEST_NAMESPACE", id = "06779da6940b431db6e566b4846d64db" }
]</code></pre>
            <p>This field specifies a Workers KV namespace that is bound to the name <code>TEST_NAMESPACE</code>, such that a Worker script can access it with logic like:</p>
            <pre><code>TEST_NAMESPACE.get("my_key");</code></pre>
            <p>We also decided to take advantage of this <code>wrangler.toml</code> field to allow users to specify a KV binding name instead of a KV namespace id. Upon providing a KV binding name, Wrangler could look up the associated <code>id</code> in <code>wrangler.toml</code> and use that for Workers KV API calls.</p><p>Wrangler users performing actions on KV namespaces could simply provide <code>--binding TEST_NAMESPACE</code> for their KV calls and let Wrangler retrieve its ID from <code>wrangler.toml</code>. Users can still specify <code>--namespace-id</code> directly if they do not have namespaces specified in their <code>wrangler.toml</code>.</p><p>Finally, we reached our happy point: Wrangler’s new KV subcommands were explicit, offered functionality for both individual and bulk actions with Workers KV, and felt ergonomic for Wrangler users to integrate into their day-to-day operations.</p>
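The lookup itself is simple. Here’s a sketch of the idea in JavaScript (Wrangler is written in Rust and its internals differ; <code>resolveNamespaceId</code> is our own name, and the config object stands in for a parsed <code>wrangler.toml</code>):

```javascript
// Map a human-readable binding name to the namespace id recorded in
// the parsed wrangler.toml configuration.
function resolveNamespaceId(config, binding) {
  const entry = (config["kv-namespaces"] || []).find(
    (ns) => ns.binding === binding
  );
  if (!entry) {
    throw new Error(`binding "${binding}" not found in wrangler.toml`);
  }
  return entry.id;
}

// A parsed wrangler.toml with the field shown above:
const config = {
  "kv-namespaces": [
    { binding: "TEST_NAMESPACE", id: "06779da6940b431db6e566b4846d64db" },
  ],
};
console.log(resolveNamespaceId(config, "TEST_NAMESPACE"));
```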
    <div>
      <h2>Lessons Learned</h2>
      <a href="#lessons-learned">
        
      </a>
    </div>
    <p>Throughout this design process, we identified the following takeaways to carry into future Wrangler work:</p><ol><li><p><b>Taxonomies of your CLI’s subcommands and invocations are a great way to ensure consistency and clarity</b>. CLI users tend to anticipate similar semantics and workflows within a CLI, so visually documenting all paths for the CLI can greatly help with identifying where new work can be consistent with older semantics. Drawing out these taxonomies can also expose missing features that seem like a fundamental part of the “big picture” of a CLI’s functionality.</p></li><li><p><b>Use other CLIs for inspiration and validation</b>. Drawing logic from popular CLIs helped us confirm our assumptions about what users like, and learn established patterns for complex CLI invocations.</p></li><li><p><b>Avoid logic that requires passing in raw ID strings</b>. Testing CLIs a lot means that remembering and re-pasting ID values gets very tedious very quickly. Emphasizing a set of purely human-readable CLI commands and arguments makes for a far more intuitive experience. When possible, taking advantage of configuration files (like we did with <code>wrangler.toml</code>) offers a straightforward way to provide mappings of human-readable names to complex IDs.</p></li></ol><p>We’re excited to continue using these design principles we’ve learned and documented as we grow Wrangler into a one-stop <a href="https://workers.cloudflare.com">Cloudflare Workers</a> shop.</p><p>If you’d like to try out Wrangler, <a href="https://github.com/cloudflare/wrangler">check it out on GitHub</a> and let us know what you think! We would love your feedback.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5CFtKpaVci96ko4xuzfFSW/1ccc3afb9f1fa313c89ed82769bd0743/WranglerCrab-1.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Wrangler]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">4ezmggl8GGox1Mv6DWV62H</guid>
            <dc:creator>Ashley M Lewis</dc:creator>
            <dc:creator>Gabbi Fisher</dc:creator>
        </item>
        <item>
            <title><![CDATA[Join Cloudflare & Moz at our next meetup, Serverless in Seattle!]]></title>
            <link>https://blog.cloudflare.com/join-cloudflare-moz-at-our-next-meetup-serverless-in-seattle/</link>
            <pubDate>Mon, 24 Jun 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is organizing a meetup in Seattle on Tuesday, June 25th and we hope you can join. We’ll be bringing together members of the developer community and Cloudflare users for an evening of discussion about serverless compute and the infinite number of use cases for deploying code at the edge.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Photo by <a href="https://unsplash.com/@jetcityninja?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">oakie</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></p><p>Cloudflare is organizing a <a href="https://www.cloudflare.com/events/seattle-customer-meetup-june2019/">meetup in Seattle</a> on Tuesday, June 25th and we hope you can join. We’ll be bringing together members of the developer community and Cloudflare users for an evening of discussion about serverless compute and the infinite number of use cases for deploying code at the edge.</p><p>To kick things off, our guest speaker <a href="https://moz.com/about/team/devin">Devin Ellis</a> will share how <a href="https://moz.com/"><b>Moz</b></a> <b>uses Cloudflare</b> <a href="https://www.cloudflare.com/products/cloudflare-workers/"><b>Workers</b></a> <b>to reduce time to first byte 30-70% by</b> <a href="https://www.cloudflare.com/learning/cdn/caching-static-and-dynamic-content/"><b>caching dynamic content</b></a> <b>at the edge.</b> Kirk Schwenkler, Solutions Engineering Lead at Cloudflare, will facilitate this discussion and share his perspective on how to grow and secure businesses at scale.</p><p>Next up, Developer Advocate <a href="https://dev.to/signalnerve">Kristian Freeman</a> will take you through a live demo of Workers and highlight <a href="https://dev.to/cloudflareworkers/a-brief-guide-to-what-s-new-with-cloudflare-workers-di8">new features</a> of the platform. This will be an interactive session where you can try out Workers for free and develop your own applications using our new command-line tool.</p><p>Food and drinks will be served until close, so grab your laptop and a friend and come on by!</p><p><a href="https://www.cloudflare.com/events/seattle-customer-meetup-june2019/"><b>View Event Details &amp; Register Here</b></a></p><p>Agenda:</p><p>

<ul>
    <li><strong>5:00 pm</strong> Doors open, food and drinks</li>
    <li><strong>5:30 pm</strong> Customer use case by Devin and Kirk</li>
    <li><strong>6:00 pm</strong> Workers deep dive with Kristian</li>
    <li><strong>6:30 - 8:30 pm</strong> Networking, food and drinks</li>
</ul>


</p> ]]></content:encoded>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Meetups]]></category>
            <category><![CDATA[MeetUp]]></category>
            <category><![CDATA[Events]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">kiRSgU1smLNWrHFcDBuId</guid>
            <dc:creator>Giuliana DeAngelis</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Serverlist: Connecting the Serverless Ecosystem]]></title>
            <link>https://blog.cloudflare.com/serverlist-5th-edition/</link>
            <pubDate>Fri, 24 May 2019 16:46:52 GMT</pubDate>
            <description><![CDATA[ Check out our 5th edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend. ]]></description>
            <content:encoded><![CDATA[ <p>Check out our fifth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend.</p><p>Sign up below to have The Serverlist sent directly to your mailbox.</p>


 ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Events]]></category>
            <category><![CDATA[The Serverlist Newsletter]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">7oXE2pL7Ns0jsAzgIu3Oaz</guid>
            <dc:creator>Connor Peshek</dc:creator>
            <dc:creator>Andrew Fitch</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building a To-Do List with Workers and KV]]></title>
            <link>https://blog.cloudflare.com/building-a-to-do-list-with-workers-and-kv/</link>
            <pubDate>Tue, 21 May 2019 13:30:00 GMT</pubDate>
            <description><![CDATA[ In this tutorial, we’ll build a todo list application in HTML, CSS and JavaScript, with a twist: all the data should be stored inside of the newly-launched Workers KV, and the application itself should be served directly from Cloudflare’s edge network, using Cloudflare Workers. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>In this tutorial, we’ll build a todo list application in HTML, CSS and JavaScript, with a twist: all the data should be stored inside of the newly-launched Workers KV, and the application itself should be served directly from Cloudflare’s edge network, using <a href="https://www.cloudflare.com/products/cloudflare-workers/">Cloudflare Workers</a>.</p><p>To start, let’s break this project down into a couple different discrete steps. In particular, it can help to focus on the constraint of working with Workers KV, as handling data is generally the most complex part of building an application:</p><ol><li><p>Build a todos data structure</p></li><li><p>Write the todos into Workers KV</p></li><li><p>Retrieve the todos from Workers KV</p></li><li><p>Return an HTML page to the client, including the todos (if they exist)</p></li><li><p>Allow creation of new todos in the UI</p></li><li><p>Allow completion of todos in the UI</p></li><li><p>Handle todo updates</p></li></ol><p>This task order is pretty convenient, because it’s almost perfectly split into two parts: first, understanding the Cloudflare/API-level things we need to know about Workers <i>and</i> KV, and second, actually building up a user interface to work with the data.</p>
    <div>
      <h3>Understanding Workers</h3>
      <a href="#understanding-workers">
        
      </a>
    </div>
    <p>In terms of implementation, a great deal of this project is centered around KV, but it’s still useful to break down <i>what</i> Workers are exactly.</p><p>Service Workers are background scripts that run in your browser, alongside your application. Cloudflare Workers are the same concept, but super-powered: your Worker scripts run on Cloudflare’s edge network, in-between your application and the client’s browser. This opens up a huge amount of opportunity for interesting integrations, especially considering the network’s massive scale around the world. Here are some of the use cases that I think are the most interesting:</p><ol><li><p>Custom security/filter rules to block bad actors before they ever reach the origin</p></li><li><p>Replacing/augmenting your website’s content based on the request content (i.e. user agents and other headers)</p></li><li><p>Caching requests to improve performance, or using Cloudflare KV to optimize high-read tasks in your application</p></li><li><p>Building an application <i>directly</i> on the edge, removing the dependence on origin servers entirely</p></li></ol><p>For this project, we’ll lean heavily towards the latter end of that list, building an application that clients communicate with, served on Cloudflare’s edge network. This means that it’ll be globally available, with low latency, while still offering the ease of building applications directly in JavaScript.</p>
    <div>
      <h3>Setting up a canvas</h3>
      <a href="#setting-up-a-canvas">
        
      </a>
    </div>
    <p>To start, I wanted to approach this project from the bare minimum: no frameworks, JS utilities, or anything like that. In particular, I was most interested in writing a project from scratch and serving it directly from the edge. Normally, I would deploy a site to something like <a href="https://pages.github.com/">GitHub Pages</a>, but avoiding the need for an origin server altogether seems like a really powerful (and performant) idea - let’s try it!</p><p>I also considered using <a href="https://todomvc.com/">TodoMVC</a> as the blueprint for building the functionality for the application, but even the <a href="http://todomvc.com/examples/vanillajs/#/">Vanilla JS</a> version is a pretty impressive amount of <a href="https://github.com/tastejs/todomvc/tree/gh-pages/examples/vanillajs">code</a>, including a number of Node packages - it wasn’t exactly a concise chunk of code to just dump into the Worker itself.</p><p>Instead, I decided to approach the beginnings of this project by building a simple, blank HTML page, and including it inside of the Worker. To start, we’ll sketch something out locally, like this:</p>
            <pre><code>&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset="UTF-8"&gt;
    &lt;meta name="viewport" content="width=device-width,initial-scale=1"&gt;
    &lt;title&gt;Todos&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;h1&gt;Todos&lt;/h1&gt;
  &lt;/body&gt;
&lt;/html&gt;</code></pre>
            <p>Hold on to this code - we’ll add it later, inside of the Workers script. For the purposes of the tutorial, I’ll be serving up this project at <a href="http://todo.kristianfreeman.com/">todo.kristianfreeman.com</a>. My personal website was already <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">hosted on Cloudflare</a>, and since I’ll be serving this project from a subdomain of it, it was time to create my first Worker.</p>
    <div>
      <h3>Creating a worker</h3>
      <a href="#creating-a-worker">
        
      </a>
    </div>
    <p>Inside of my Cloudflare account, I hopped into the Workers tab and launched the Workers editor.</p><p>This is one of my favorite features of the editor - working with your actual website, understanding <i>how</i> the worker will interface with your existing project.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/121XS5g0iERXCdVwilld5G/8fe0b9a7e7b869aaf2b5d5bba3c51d98/image4-2.png" />
            
            </figure><p>The process of writing a Worker should be familiar to anyone who’s used the fetch API before. In short, the default code for a Worker hooks into the <code>fetch</code> event, passing the request of that event into a custom function, <code>handleRequest</code>:</p>
            <pre><code>addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})</code></pre>
            <p>Within <code>handleRequest</code>, we make the actual request, using <code>fetch</code>, and return the response to the client. In short, we have a place to intercept the response body, but by default, we let it pass-through:</p>
            <pre><code>async function handleRequest(request) {
  console.log('Got request', request)
  const response = await fetch(request)
  console.log('Got response', response)
  return response
}</code></pre>
            <p>So, given this, where do we begin actually <i>doing stuff</i> with our worker?</p><p>Unlike the default code given to you in the Workers interface, we want to skip fetching the incoming request: instead, we’ll construct a new <code>Response</code>, and serve it directly from the edge:</p>
            <pre><code>async function handleRequest(request) {
  const response = new Response("Hello!")
  return response
}</code></pre>
            <p>Given that small bit of functionality we’ve added to the worker, let’s deploy it. Moving into the “Routes” tab of the Worker editor, I added the route <code>https://todo.kristianfreeman.com/*</code> and attached it to the <code>cloudflare-worker-todos</code> script.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/29QYuFrFiwZKVwZHkWN90v/851c0e9b95c03badc9eccbee41079a20/image5.png" />
            
            </figure><p>Once attached, I deployed the worker, and voila! Visiting todo.kristianfreeman.com in-browser gives me my simple “Hello!” response back.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Boo3xWgZdHEDHgpUgIrKy/594ae6a988d40a13253af74937d9a964/Screen-Shot-2019-05-15-at-10.12.04-AM.png" />
            
            </figure>
    <div>
      <h3>Writing data to KV</h3>
      <a href="#writing-data-to-kv">
        
      </a>
    </div>
    <p>The next step is to populate our todo list with actual data. To do this, we’ll make use of Cloudflare’s Workers KV - it’s a simple key-value store that you can access inside of your Worker script to read (and write, although it’s less common) data.</p><p>To get started with KV, we need to set up a “namespace”. All of our cached data will be stored inside that namespace, and given just a bit of configuration, we can access that namespace inside the script with a predefined variable.</p><p>I’ll create a new namespace in the Workers dashboard, called <code>KRISTIAN_TODOS</code>, and in the Worker editor, I’ll expose the namespace by binding it to the variable <code>KRISTIAN_TODOS</code>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4i4hCpAgNXFTWxg8X2CXW2/8f67cbecd052f177ca02ce089b511e65/image1-6.png" />
            
            </figure><p>Given the presence of <code>KRISTIAN_TODOS</code> in my script, it’s time to understand the KV API. At the time of writing, a KV namespace has three primary methods you can use to interface with your cache: <code>get</code>, <code>put</code>, and <code>delete</code>. Pretty straightforward!</p><p>Let’s start storing data by defining an initial set of data, which we’ll put inside of the cache using the <code>put</code> method. I’ve opted to define an object, <code>defaultData</code>, instead of a simple array of todos: we may want to store metadata and other information inside of this cache object later on. Given that data object, I’ll use <code>JSON.stringify</code> to put a simple string into the cache:</p>
            <pre><code>async function handleRequest(request) {
  // ...previous code
  
  const defaultData = { 
    todos: [
      {
        id: 1,
        name: 'Finish the Cloudflare Workers blog post',
        completed: false
      }
    ] 
  }
  await KRISTIAN_TODOS.put("data", JSON.stringify(defaultData))
}
</code></pre>
            <p>The Workers KV data store is <i>eventually</i> consistent: writing to the cache means that it will become available <i>eventually</i>, but it’s possible to attempt to read a value back from the cache immediately after writing it, only to find that the cache hasn’t been updated yet.</p><p>Given the presence of data in the cache, and the assumption that our cache is eventually consistent, we should adjust this code slightly: first, we should actually read from the cache, parsing the value back out, and using it as the data source if it exists. If it doesn’t, we’ll refer to <code>defaultData</code>, setting it as the data source <i>for now</i> (remember, it should be set in the future… <i>eventually</i>), while also setting it in the cache for future use. After breaking out the code into a few functions for simplicity, the result looks like this:</p>
            <pre><code>const defaultData = { 
  todos: [
    {
      id: 1,
      name: 'Finish the Cloudflare Workers blog post',
      completed: false
    }
  ] 
}

const setCache = data =&gt; KRISTIAN_TODOS.put("data", data)
const getCache = () =&gt; KRISTIAN_TODOS.get("data")

async function getTodos(request) {
  // ... previous code
  
  let data;
  const cache = await getCache()
  if (!cache) {
    await setCache(JSON.stringify(defaultData))
    data = defaultData
  } else {
    data = JSON.parse(cache)
  }
}</code></pre>
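<p>Since this code depends on the <code>KRISTIAN_TODOS</code> binding, it only runs inside the Workers runtime. The same read-or-default flow can be exercised anywhere against a tiny in-memory stand-in for the namespace - a sketch for checking the logic only, where the stub object is mine, not part of the KV API:</p>

```javascript
// A tiny in-memory stand-in for the KRISTIAN_TODOS namespace, exposing the
// same async get/put methods, so the read-or-default logic runs anywhere.
const store = new Map();
const KRISTIAN_TODOS = {
  get: async (key) => (store.has(key) ? store.get(key) : null),
  put: async (key, value) => { store.set(key, value); },
};

const defaultData = {
  todos: [
    { id: 1, name: "Finish the Cloudflare Workers blog post", completed: false },
  ],
};

const setCache = (data) => KRISTIAN_TODOS.put("data", data);
const getCache = () => KRISTIAN_TODOS.get("data");

// Same shape as getTodos above: read the cache, and on a miss fall back to
// defaultData while seeding the cache for future reads.
async function readTodos() {
  const cache = await getCache();
  if (!cache) {
    await setCache(JSON.stringify(defaultData));
    return defaultData;
  }
  return JSON.parse(cache);
}
```

<p>On a cold cache the first call seeds the stub and returns <code>defaultData</code>; subsequent calls parse the cached JSON - the same behavior we expect (eventually!) from the real namespace.</p>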
            
    <div>
      <h3>Rendering data from KV</h3>
      <a href="#rendering-data-from-kv">
        
      </a>
    </div>
    <p>Given the presence of data in our code, which is the cached data object for our application, we should actually take this data and make it available on screen.</p><p>In our Workers script, we’ll make a new variable, <code>html</code>, and use it to build up a static HTML template that we can serve to the client. In <code>handleRequest</code>, we can construct a new <code>Response</code> (with a <code>Content-Type</code> header of <code>text/html</code>), and serve it to the client:</p>
            <pre><code>const html = `
&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset="UTF-8"&gt;
    &lt;meta name="viewport" content="width=device-width,initial-scale=1"&gt;
    &lt;title&gt;Todos&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;h1&gt;Todos&lt;/h1&gt;
  &lt;/body&gt;
&lt;/html&gt;
`

async function handleRequest(request) {
  const response = new Response(html, {
    headers: { 'Content-Type': 'text/html' }
  })
  return response
}</code></pre>
            
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JYK6WXof6rGxmk6ovyH27/77f4df33d945cf673df4510bf650b16b/Screen-Shot-2019-05-15-at-10.06.57-AM.png" />
            
            </figure><p>We have a static HTML site being rendered, and now we can begin populating it with data! In the body, we’ll add a <code>ul</code> tag with an id of <code>todos</code>:</p>
            <pre><code>&lt;body&gt;
  &lt;h1&gt;Todos&lt;/h1&gt;
  &lt;ul id="todos"&gt;&lt;/ul&gt;
&lt;/body&gt;</code></pre>
            <p>Given that body, we can also add a script <i>after</i> the body that takes a todos array, loops through it, and for each todo in the array, creates a <code>li</code> element and appends it to the todos list:</p>
            <pre><code>&lt;script&gt;
  window.todos = [];
  var todoContainer = document.querySelector("#todos");
  window.todos.forEach(todo =&gt; {
    var el = document.createElement("li");
    el.innerText = todo.name;
    todoContainer.appendChild(el);
  });
&lt;/script&gt;</code></pre>
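<p>As a quick sanity check on that loop’s logic (the DOM version only runs in a browser), the same transformation can be written as a plain string builder - illustrative only, and <code>renderItems</code> is not part of the tutorial’s code:</p>

```javascript
// DOM-free analog of the render loop: emit one <li> per todo, by name.
const renderItems = (todos) =>
  todos.map((todo) => `<li>${todo.name}</li>`).join("");

renderItems([{ name: "Finish the Cloudflare Workers blog post" }]);
// → "<li>Finish the Cloudflare Workers blog post</li>"
```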
            <p>Our static page can take in <code>window.todos</code>, and render HTML based on it, but we haven’t actually passed in any data from KV. To do this, we’ll need to make a couple changes.</p><p>First, our <code>html</code> <i>variable</i> will change to a <i>function</i>. The function will take in an argument, <code>todos</code>, which will populate the <code>window.todos</code> variable in the above code sample:</p>
            <pre><code>const html = todos =&gt; `
&lt;!doctype html&gt;
&lt;html&gt;
  &lt;!-- ... --&gt;
  &lt;script&gt;
    window.todos = ${todos || []}
    var todoContainer = document.querySelector("#todos");
    // ...
  &lt;/script&gt;
&lt;/html&gt;
`</code></pre>
            <p>In <code>handleRequest</code>, we can use the retrieved KV data to call the <code>html</code> function, and generate a <code>Response</code> based on it:</p>
            <pre><code>async function handleRequest(request) {
  let data;
  
  // Set data using cache or defaultData from previous section...
  
  const body = html(JSON.stringify(data.todos))
  const response = new Response(body, {
    headers: { 'Content-Type': 'text/html' }
  })
  return response
}</code></pre>
            <p>The finished product looks something like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/579BL1kiP7vJc6bxWcDD82/057188acb0323d3764e76419b13e7469/image3-3.png" />
            
            </figure>
    <div>
      <h3>Adding todos from the UI</h3>
      <a href="#adding-todos-from-the-ui">
        
      </a>
    </div>
    <p>At this point, we’ve built a Cloudflare Worker that takes data from Cloudflare KV and renders a static page based on it. That static page reads the data, and generates a todo list based on that data. Of course, the piece we’re missing is <i>creating</i> todos, from inside the UI. We know that we can add todos using the KV API - we could simply update the cache by saying <code>KRISTIAN_TODOS.put("data", newData)</code>, but how do we update it from inside the UI?</p><p>It’s worth noting here that Cloudflare’s Workers documentation suggests that any writes to your KV namespace happen via their API - that is, at its simplest form, a cURL statement:</p>
            <pre><code>curl "&lt;https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/values/first-key&gt;" \
  -X PUT \
  -H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
  -H "X-Auth-Key: $CLOUDFLARE_AUTH_KEY" \
  --data 'My first value!'</code></pre>
            <p>We’ll implement something similar by handling a second route in our worker, designed to watch for <code>PUT</code> requests to <code>/</code>. When a body is received at that URL, the worker will send the new todo data to our KV store.</p><p>I’ll add this new functionality to my worker, and in <code>handleRequest</code>, if the request method is a <code>PUT</code>, it will take the request body and update the cache:</p>
            <pre><code>addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})

const setCache = data =&gt; KRISTIAN_TODOS.put("data", data)

async function updateTodos(request) {
  const body = await request.text()
  try {
    JSON.parse(body)
    await setCache(body)
    return new Response(body, { status: 200 })
  } catch (err) {
    return new Response(err, { status: 500 })
  }
}

async function handleRequest(request) {
  if (request.method === "PUT") {
    return updateTodos(request);
  } else {
    // Defined in previous code block
    return getTodos(request);
  }
}</code></pre>
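<p>The JSON gate at the heart of <code>updateTodos</code> can also be exercised on its own. In this sketch, a plain <code>Map</code> stands in for the KV namespace and a numeric status stands in for the <code>Response</code> (the names are made up for illustration):</p>

```javascript
// Sketch of updateTodos' validation step: only a body that parses as JSON
// is written to the store; anything else maps to a 500.
function tryUpdate(store, body) {
  try {
    JSON.parse(body);          // ensure the body is valid JSON...
    store.set("data", body);   // ...before persisting it
    return 200;
  } catch (err) {
    return 500;
  }
}

const store = new Map();
tryUpdate(store, JSON.stringify({ todos: [] })); // → 200
tryUpdate(store, "definitely not json");         // → 500
```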
            <p>The script is pretty straightforward - we check that the request is a <code>PUT</code>, and wrap the remainder of the code in a <code>try/catch</code> block. First, we parse the body of the request coming in, ensuring that it is JSON, before we update the cache with the new data, and return it to the user. If anything goes wrong, we simply return a 500. If the route is hit with an HTTP method <i>other</i> than <code>PUT</code> - that is, <code>GET</code>, <code>DELETE</code>, or anything else - we fall through to <code>getTodos</code> and serve the rendered todo list.</p><p>With this script, we can now add some “dynamic” functionality to our HTML page to actually hit this route.</p><p>First, we’ll create an input for our todo “name”, and a button for “submitting” the todo.</p>
            <pre><code>&lt;div&gt;
  &lt;input type="text" name="name" placeholder="A new todo"&gt;&lt;/input&gt;
  &lt;button id="create"&gt;Create&lt;/button&gt;
&lt;/div&gt;</code></pre>
            <p>Given that input and button, we can add a corresponding JavaScript function to watch for clicks on the button - once the button is clicked, the browser will <code>PUT</code> to <code>/</code> and submit the todo.</p>
            <pre><code>var createTodo = function() {
  var input = document.querySelector("input[name=name]");
  if (input.value.length) {
    fetch("/", { 
      method: 'PUT', 
      body: JSON.stringify({ todos: todos }) 
    });
  }
};

document.querySelector("#create")
  .addEventListener('click', createTodo);</code></pre>
            <p>This code updates the cache, but what about our local UI? Remember that the KV cache is <i>eventually consistent</i> - even if we were to update our worker to read from the cache and return it, we have no guarantees it’ll actually be up-to-date. Instead, let’s just update the list of todos locally, by taking our original code for rendering the todo list, making it a re-usable function called <code>populateTodos</code>, and calling it when the page loads <i>and</i> when the cache request has finished:</p>
            <pre><code>var populateTodos = function() {
  var todoContainer = document.querySelector("#todos");
  todoContainer.innerHTML = null;
  window.todos.forEach(todo =&gt; {
    var el = document.createElement("li");
    el.innerText = todo.name;
    todoContainer.appendChild(el);
  });
};

populateTodos();

var createTodo = function() {
  var input = document.querySelector("input[name=name]");
  if (input.value.length) {
    todos = [].concat(todos, { 
      id: todos.length + 1, 
      name: input.value,
      completed: false,
    });
    fetch("/", { 
      method: 'PUT', 
      body: JSON.stringify({ todos: todos }) 
    });
    populateTodos();
    input.value = "";
  }
};

document.querySelector("#create")
  .addEventListener('click', createTodo);</code></pre>
            <p>With the client-side code in place, deploying the new Worker should put all these pieces together. The result is an actual dynamic todo list!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bexkIHx7QhOOnLX7m0T2a/d1cbb132f91d422682e12a169d1e3b1c/image7.gif" />
            
            </figure>
    <div>
      <h3>Updating todos from the UI</h3>
      <a href="#updating-todos-from-the-ui">
        
      </a>
    </div>
    <p>For the final piece of our (very) basic todo list, we need to be able to update todos - specifically, marking them as completed.</p><p>Luckily, a great deal of the infrastructure for this work is already in place. We can currently update the todo list data in our cache, as evidenced by our <code>createTodo</code> function. Performing updates on a todo, in fact, is much more of a client-side task than a Worker-side one!</p><p>To start, let’s update the client-side code for generating a todo. Instead of a <code>ul</code>-based list, we’ll migrate the todo container <i>and</i> the todos themselves into using <code>div</code>s:</p>
            <pre><code>&lt;!-- &lt;ul id="todos"&gt;&lt;/ul&gt; becomes... --&gt;
&lt;div id="todos"&gt;&lt;/div&gt;</code></pre>
            <p>The <code>populateTodos</code> function can be updated to generate a <code>div</code> for each todo. In addition, we’ll move the name of the todo into a child element of that <code>div</code>:</p>
            <pre><code>var populateTodos = function() {
  var todoContainer = document.querySelector("#todos");
  todoContainer.innerHTML = null;
  window.todos.forEach(todo =&gt; {
    var el = document.createElement("div");
    var name = document.createElement("span");
    name.innerText = todo.name;
    el.appendChild(name);
    todoContainer.appendChild(el);
  });
}</code></pre>
    <p>So far, we’ve designed the client-side part of this code to take an array of todos in, and given that array, render out a list of simple HTML elements. There are a number of things that we’ve been carrying along without quite having a use for yet: specifically, the inclusion of IDs, and the <code>completed</code> value on each todo. Luckily, these things work well together, in order to support actually updating todos in the UI.</p><p>To start, it would be useful to signify the ID of each todo in the HTML. By doing this, we can then refer to the element later, in order to match it up with the todo in the JavaScript part of our code. <a href="https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/dataset"><i>Data attributes</i></a>, and the corresponding <code>dataset</code> property in JavaScript, are a perfect way to implement this. When we generate our <code>div</code> element for each todo, we can simply attach a data attribute called <code>todo</code> to each <code>div</code>:</p>
            <pre><code>window.todos.forEach(todo =&gt; {
  var el = document.createElement("div");
  el.dataset.todo = todo.id
  // ... more setup

  todoContainer.appendChild(el);
});</code></pre>
            <p>Inside our HTML, each <code>div</code> for a todo now has an attached data attribute, which looks like:</p>
            <pre><code>&lt;div data-todo="1"&gt;&lt;/div&gt;
&lt;div data-todo="2"&gt;&lt;/div&gt;</code></pre>
            <p>Now we can generate a checkbox for each todo element. This checkbox will default to unchecked for new todos, of course, but we can mark it as checked as the element is rendered in the window:</p>
            <pre><code>window.todos.forEach(todo =&gt; {
  var el = document.createElement("div");
  el.dataset.todo = todo.id
  
  var name = document.createElement("span");
  name.innerText = todo.name;
  
  var checkbox = document.createElement("input")
  checkbox.type = "checkbox"
  checkbox.checked = todo.completed;

  el.appendChild(checkbox);
  el.appendChild(name);
  todoContainer.appendChild(el);
})</code></pre>
            <p>The checkbox is set up to correctly reflect the value of completed on each todo, but it doesn’t yet update when we actually check the box! To do this, we’ll add an event listener on the <code>click</code> event, calling <code>completeTodo</code>. Inside the function, we’ll inspect the checkbox element, finding its parent (the todo <code>div</code>), and using the <code>todo</code> data attribute on it to find the corresponding todo in our data. Given that todo, we can toggle the value of completed, update our data, and re-render the UI:</p>
            <pre><code>var completeTodo = function(evt) {
  var checkbox = evt.target;
  var todoElement = checkbox.parentNode;
  
  var newTodoSet = [].concat(window.todos)
  var todo = newTodoSet.find(t =&gt; 
    t.id == todoElement.dataset.todo
  );
  todo.completed = !todo.completed;
  window.todos = newTodoSet;
  updateTodos();
}</code></pre>
            <p>The final result of our code is a system that reads the <code>todos</code> variable, persists that value to our Workers KV namespace, and then does a straightforward re-render of the UI based on the data it has locally.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dSK1aNSHxnICPPqdBZeS1/4612465de022714c1d7a7cc64e261d55/image8-1.png" />
            
            </figure>
    <div>
      <h3>Conclusions and next steps</h3>
      <a href="#conclusions-and-next-steps">
        
      </a>
    </div>
    <p>With this, we’ve created a pretty remarkable project: an almost entirely static HTML/JS application, transparently powered by Workers KV and Workers, served at the edge. There are a number of additions you could make to this application, whether a better design (I’ll leave this as an exercise for readers; you can see my version at <a href="https://todo.kristianfreeman.com/">todo.kristianfreeman.com</a>), improved security, better speed, and so on.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Vif8WsAf5slR8KM3NLeGv/6cd59bcdf0c30a6df7e4bbf9eef916ba/image2-1.png" />
            
            </figure><p>One interesting and fairly trivial addition is per-user caching. Right now, the cache key is simply “data”: every visitor to the site shares a single todo list. Because we have the request information inside our Worker, it’s easy to make this data user-specific. For instance, we can derive the cache key from the requesting IP address:</p>
            <pre><code>const ip = request.headers.get("CF-Connecting-IP")
const cacheKey = `data-${ip}`;
const getCache = key =&gt; KRISTIAN_TODOS.get(key)
getCache(cacheKey)</code></pre>
            <p>One more deploy of our Workers project, and we have a full todo list application, with per-user functionality, served at the edge!</p><p>The final version of our Workers script looks like this:</p>
            <pre><code>const html = todos =&gt; `
&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;meta charset="UTF-8"&gt;
    &lt;meta name="viewport" content="width=device-width,initial-scale=1"&gt;
    &lt;title&gt;Todos&lt;/title&gt;
    &lt;link href="https://cdn.jsdelivr.net/npm/tailwindcss/dist/tailwind.min.css" rel="stylesheet"&gt;&lt;/link&gt;
  &lt;/head&gt;

  &lt;body class="bg-blue-100"&gt;
    &lt;div class="w-full h-full flex content-center justify-center mt-8"&gt;
      &lt;div class="bg-white shadow-md rounded px-8 pt-6 py-8 mb-4"&gt;
        &lt;h1 class="block text-grey-800 text-md font-bold mb-2"&gt;Todos&lt;/h1&gt;
        &lt;div class="flex"&gt;
          &lt;input class="shadow appearance-none border rounded w-full py-2 px-3 text-grey-800 leading-tight focus:outline-none focus:shadow-outline" type="text" name="name" placeholder="A new todo"&gt;&lt;/input&gt;
          &lt;button class="bg-blue-500 hover:bg-blue-800 text-white font-bold ml-2 py-2 px-4 rounded focus:outline-none focus:shadow-outline" id="create" type="submit"&gt;Create&lt;/button&gt;
        &lt;/div&gt;
        &lt;div class="mt-4" id="todos"&gt;&lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/body&gt;

  &lt;script&gt;
    window.todos = ${todos || []}

    var updateTodos = function() {
      fetch("/", { method: 'PUT', body: JSON.stringify({ todos: window.todos }) })
      populateTodos()
    }

    var completeTodo = function(evt) {
      var checkbox = evt.target
      var todoElement = checkbox.parentNode
      var newTodoSet = [].concat(window.todos)
      var todo = newTodoSet.find(t =&gt; t.id == todoElement.dataset.todo)
      todo.completed = !todo.completed
      window.todos = newTodoSet
      updateTodos()
    }

    var populateTodos = function() {
      var todoContainer = document.querySelector("#todos")
      todoContainer.innerHTML = null

      window.todos.forEach(todo =&gt; {
        var el = document.createElement("div")
        el.className = "border-t py-4"
        el.dataset.todo = todo.id

        var name = document.createElement("span")
        name.className = todo.completed ? "line-through" : ""
        name.innerText = todo.name

        var checkbox = document.createElement("input")
        checkbox.className = "mx-4"
        checkbox.type = "checkbox"
        checkbox.checked = todo.completed
        checkbox.addEventListener('click', completeTodo)

        el.appendChild(checkbox)
        el.appendChild(name)
        todoContainer.appendChild(el)
      })
    }

    populateTodos()

    var createTodo = function() {
      var input = document.querySelector("input[name=name]")
      if (input.value.length) {
        window.todos = [].concat(window.todos, { id: window.todos.length + 1, name: input.value, completed: false })
        input.value = ""
        updateTodos()
      }
    }

    document.querySelector("#create").addEventListener('click', createTodo)
  &lt;/script&gt;
&lt;/html&gt;
`

const defaultData = { todos: [] }

const setCache = (key, data) =&gt; KRISTIAN_TODOS.put(key, data)
const getCache = key =&gt; KRISTIAN_TODOS.get(key)

async function getTodos(request) {
  const ip = request.headers.get('CF-Connecting-IP')
  const cacheKey = `data-${ip}`
  let data
  const cache = await getCache(cacheKey)
  if (!cache) {
    await setCache(cacheKey, JSON.stringify(defaultData))
    data = defaultData
  } else {
    data = JSON.parse(cache)
  }
  const body = html(JSON.stringify(data.todos || []))
  return new Response(body, {
    headers: { 'Content-Type': 'text/html' },
  })
}

async function updateTodos(request) {
  const body = await request.text()
  const ip = request.headers.get('CF-Connecting-IP')
  const cacheKey = `data-${ip}`
  try {
    JSON.parse(body)
    await setCache(cacheKey, body)
    return new Response(body, { status: 200 })
  } catch (err) {
    return new Response(err, { status: 500 })
  }
}

async function handleRequest(request) {
  if (request.method === 'PUT') {
    return updateTodos(request)
  } else {
    return getTodos(request)
  }
}

addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})</code></pre>
            <p>You can find the source code for this project, as well as a README with deployment instructions, on <a href="https://github.com/signalnerve/cloudflare-workers-todos">GitHub</a>.</p> ]]></content:encoded>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">1KZ1qTWOTEOhmuFFXT6xuC</guid>
            <dc:creator>Kristian Freeman</dc:creator>
        </item>
        <item>
            <title><![CDATA[Workers KV — Cloudflare's distributed database]]></title>
            <link>https://blog.cloudflare.com/workers-kv-is-ga/</link>
            <pubDate>Tue, 21 May 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today, we’re excited to announce Workers KV is entering general availability and is ready for production use! ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re excited to announce Workers KV is entering general availability and is ready for production use!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3188jggqN4vJJchIXvTReV/e9476e1e9e5783948d4255f92bcb2e47/Workers-KV-GA_2x.png" />
            
            </figure>
    <div>
      <h3>What is Workers KV?</h3>
      <a href="#what-is-workers-kv">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/products/workers-kv/">Workers KV</a> is a highly distributed, eventually consistent, key-value store that spans Cloudflare's global edge. It allows you to store billions of key-value pairs and read them with ultra-low latency anywhere in the world. Now you can build entire applications with the performance of a CDN static cache.</p>
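<p>To make the API shape concrete, here is a minimal sketch of the two core operations, get and put, run against an in-memory stand-in for a namespace binding. The stub and the binding name <code>TODOS</code> are mine, for local illustration only; in a deployed Worker, the binding is injected by the runtime.</p>

```javascript
// In-memory stand-in for a Workers KV namespace binding, for local
// experimentation. A real binding exposes the same async get/put shape.
const makeKvStub = () => {
  const store = new Map();
  return {
    async get(key) {
      // Like KV, return null on a miss.
      return store.has(key) ? store.get(key) : null;
    },
    async put(key, value) {
      store.set(key, String(value));
    },
  };
};

async function demo() {
  const TODOS = makeKvStub(); // hypothetical binding name
  await TODOS.put("data", JSON.stringify({ todos: [] }));
  return JSON.parse(await TODOS.get("data"));
}
```

<p>The same <code>get</code>/<code>put</code> calls work unchanged against a real namespace binding inside a Worker.</p>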
    <div>
      <h3>Why did we build it?</h3>
      <a href="#why-did-we-build-it">
        
      </a>
    </div>
    <p><a href="https://www.cloudflare.com/products/cloudflare-workers/">Workers</a> is a platform that lets you run JavaScript on Cloudflare's global edge of 175+ data centers. With only a few lines of code, you can route HTTP requests, modify responses, or even create new responses without an origin server.</p>
            <pre><code>// A Worker that handles a single redirect,
// such a humble beginning...
addEventListener("fetch", event =&gt; {
  event.respondWith(handleOneRedirect(event.request))
})

async function handleOneRedirect(request) {
  let url = new URL(request.url)
  let device = request.headers.get("CF-Device-Type")
  // If the device is mobile, add a prefix to the hostname.
  // (eg. example.com becomes mobile.example.com)
  if (device === "mobile") {
    url.hostname = "mobile." + url.hostname
    return Response.redirect(url, 302)
  }
  // Otherwise, send request to the original hostname.
  return await fetch(request)
}</code></pre>
            <p>Customers quickly came to us with use cases that required a way to store persistent data. Following our example above, it's easy to handle a single redirect, but what if you want to handle <b>billions</b> of them? You would have to hard-code them into your Workers script, fit it all in under 1 MB, and re-deploy it every time you wanted to make a change — yikes! That’s why we built Workers KV.</p>
            <pre><code>// A Worker that can handle billions of redirects,
// now that's more like it!
addEventListener("fetch", event =&gt; {
  event.respondWith(handleBillionsOfRedirects(event.request))
})

async function handleBillionsOfRedirects(request) {
  let prefix = "/redirect"
  let url = new URL(request.url)
  // Check if the URL is a special redirect.
  // (eg. example.com/redirect/&lt;random-hash&gt;)
  if (url.pathname.startsWith(prefix)) {
    // REDIRECTS is a custom variable that you define,
    // it binds to a Workers KV "namespace." (aka. a storage bucket)
    let redirect = await REDIRECTS.get(url.pathname.replace(prefix, ""))
    if (redirect) {
      url.pathname = redirect
      return Response.redirect(url, 302)
    }
  }
  // Otherwise, send request to the original path.
  return await fetch(request)
}</code></pre>
            <p>With only a few changes from our previous example, we scaled from one redirect to billions − that's just a taste of what you can build with Workers KV.</p>
    <div>
      <h3>How does it work?</h3>
      <a href="#how-does-it-work">
        
      </a>
    </div>
    <p>Distributed data stores are often modeled using the <a href="https://en.wikipedia.org/wiki/CAP_theorem">CAP Theorem</a>, which states that a distributed system can provide only <b>two of the three</b> following guarantees:</p><ul><li><p><b>C</b>onsistency - is my data the same everywhere?</p></li><li><p><b>A</b>vailability - is my data accessible all the time?</p></li><li><p><b>P</b>artition tolerance - is my data resilient to regional outages?</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2q0UXWQCbVcPVUX9nrkB88/2c384717f1db76f9ecdf7065c08b770f/workers-kv-venn-diagram_2x-1.png" />
            
            </figure><p>Workers KV chooses to guarantee <b>A</b>vailability and <b>P</b>artition tolerance. This combination is known as <a href="https://en.wikipedia.org/wiki/Eventual_consistency">eventual consistency</a>, which presents Workers KV with two unique competitive advantages:</p><ul><li><p>Reads are ultra-fast (median of 12 ms) since they're powered by our caching technology.</p></li><li><p>Data is available across 175+ edge data centers and resilient to regional outages.</p></li></ul><p>There are, however, tradeoffs to eventual consistency. If two clients write different values to the same key at the same time, the last client to write <b>eventually</b> "wins" and its value becomes globally consistent. This also means that if a client writes to a key and that same client reads that same key, the values may be inconsistent for a short amount of time.</p><p>To help visualize this scenario, here's a real-life example amongst three friends:</p><ul><li><p>Suppose Matthew, Michelle, and Lee are planning their weekly lunch.</p></li><li><p>Matthew decides they're going out for sushi.</p></li><li><p>Matthew tells Michelle about the sushi plans, and Michelle agrees.</p></li><li><p>Lee, not knowing the plans, tells Michelle they're actually having pizza.</p></li></ul><p>An hour later, Michelle and Lee are waiting at the pizza parlor while Matthew is sitting alone at the sushi restaurant. What went wrong? We can chalk this up to eventual consistency, because after waiting for a few minutes, Matthew looks at his updated calendar and <b>eventually</b> finds the new truth: they're going out for pizza instead.</p><p>While it may take minutes in real life, Workers KV is much faster. It can achieve global consistency in less than 60 seconds. Additionally, when a Worker writes to a key, then <b>immediately</b> reads that same key, it can expect the values to be consistent if both operations came from the same location.</p>
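<p>One way to picture last-write-wins convergence is a toy resolver that keeps whichever write carries the latest timestamp. This is only an illustration of the behavior described above, not Workers KV's actual algorithm:</p>

```javascript
// Toy last-write-wins resolver: given the writes a location has
// observed (each tagged with a timestamp), it converges on the newest.
function resolveLastWriteWins(writes) {
  return writes.reduce(
    (winner, w) => (!winner || w.timestamp > winner.timestamp ? w : winner),
    null
  );
}

// Two locations observe the same writes in different orders...
const seattle = [
  { value: "sushi", timestamp: 1 },
  { value: "pizza", timestamp: 2 },
];
const london = [
  { value: "pizza", timestamp: 2 },
  { value: "sushi", timestamp: 1 },
];
// ...yet both eventually resolve to the same value, "pizza",
// because resolution depends only on the timestamps, not arrival order.
```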
    <div>
      <h3>When should I use it?</h3>
      <a href="#when-should-i-use-it">
        
      </a>
    </div>
    <p>Now that you understand the benefits and tradeoffs of using eventual consistency, how do you determine if it's the right storage solution for your application? Simply put, if you want global availability with ultra-fast reads, Workers KV is right for you.</p><p>However, if your application is <b>frequently</b> writing to the <b>same</b> key, there is an additional consideration. We call it "the Matthew question": Are you okay with the Matthews of the world <b>occasionally</b> going to the wrong restaurant?</p><p>You can imagine use cases (like our redirect Worker example) where this doesn't make any material difference. But if you decide to keep track of a user’s bank account balance, you would not want the possibility of two balances existing at once, since they could purchase something with money they’ve already spent.</p>
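<p>The bank-balance hazard can be made concrete with a toy read-modify-write race: both clients read the same balance, each subtracts its own purchase, and the later write silently discards the earlier one (a classic lost update). This sketch only illustrates the failure mode; the function is mine, not part of KV:</p>

```javascript
// Two clients concurrently read the balance, compute a new value
// locally, and write it back. With last-write-wins, B's write
// overwrites A's, so A's purchase is never deducted.
function simulateLostUpdate(initialBalance, purchaseA, purchaseB) {
  const readA = initialBalance;   // client A reads 100
  const readB = initialBalance;   // client B reads 100 at the same time
  let stored = readA - purchaseA; // A writes back 70
  stored = readB - purchaseB;     // B writes back 90, clobbering A's update
  return stored;                  // 90, not the correct 60
}
```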
    <div>
      <h3>What can I build with it?</h3>
      <a href="#what-can-i-build-with-it">
        
      </a>
    </div>
    <p>Here are a few examples of applications that have been built with KV:</p><ul><li><p>Mass redirects - handle billions of HTTP redirects.</p></li><li><p>User authentication - validate user requests to your API.</p></li><li><p>Translation keys - dynamically localize your web pages.</p></li><li><p>Configuration data - manage who can access your origin.</p></li><li><p>Step functions - sync state data between multiple API functions.</p></li><li><p>Edge file store - <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">host</a> large amounts of small files.</p></li></ul><p>We’ve highlighted several of those <a href="/building-with-workers-kv/">use cases</a> in our previous blog post. We also have some more in-depth code walkthroughs, including a recently published blog post on how to build an online <a href="/building-a-to-do-list-with-workers-and-kv/">To-do list with Workers KV</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3oQnAPwVXWgsjCuSXuYL7q/755458cfb741e45f78c468a4729f7fc7/GQ4hrfQ.png" />
            
            </figure>
    <div>
      <h3>What's new since beta?</h3>
      <a href="#whats-new-since-beta">
        
      </a>
    </div>
    <p>By far, our most common request was to make it easier to write data to Workers KV. That's why we're releasing three new ways to make that experience even better:</p>
    <div>
      <h4>1. Bulk Writes</h4>
      <a href="#1-bulk-writes">
        
      </a>
    </div>
    <p>If you want to import your existing data into Workers KV, you don't want to go through the hassle of sending an HTTP request for <b>every</b> key-value pair. That's why we added a <a href="https://api.cloudflare.com/#workers-kv-namespace-write-multiple-key-value-pairs">bulk endpoint</a> to the Cloudflare API. Now you can upload up to 10,000 pairs (up to 100 MB of data) in a single PUT request.</p>
            <pre><code>curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/bulk" \
  -X PUT \
  -H "X-Auth-Key: $CLOUDFLARE_AUTH_KEY" \
  -H "X-Auth-Email: $CLOUDFLARE_AUTH_EMAIL" \
  -d '[
    {"key": "built_by",    "value": "kyle, alex, charlie, andrew, and brett"},
    {"key": "reviewed_by", "value": "joaquin"},
    {"key": "approved_by", "value": "steve"}
  ]'</code></pre>
            <p>Let's walk through an example use case: you want to off-load your website translation to Workers. Since you're reading translation keys frequently and only occasionally updating them, this application works well with the eventual consistency model of Workers KV.</p><p>In this example, we hook into <a href="https://crowdin.com/">Crowdin</a>, a popular platform to manage translation data. This Worker responds to a <code>/translate</code> endpoint, downloads all your translation keys, and bulk writes them to Workers KV so you can read it later on our edge:</p>
            <pre><code>addEventListener("fetch", event =&gt; {
  if (event.request.url.pathname === "/translate") {
    event.respondWith(uploadTranslations())
  }
})

async function uploadTranslations() {
  // Ask crowdin for all of our translations.
  var response = await fetch(
    "https://api.crowdin.com/api/project" +
    "/:ci_project_id/download/all.zip?key=:ci_secret_key")
  // If crowdin is responding, parse the response into
  // a single json with all of our translations.
  if (response.ok) {
    var translations = await zipToJson(response)
    return await bulkWrite(translations)
  }
  // Return the errored response from crowdin.
  return response
}

async function bulkWrite(keyValuePairs) {
  return fetch(
    "https://api.cloudflare.com/client/v4/accounts" +
    "/:cf_account_id/storage/kv/namespaces/:cf_namespace_id/bulk",
    {
      method: "PUT",
      headers: {
        "Content-Type": "application/json",
        "X-Auth-Key": ":cf_auth_key",
        "X-Auth-Email": ":cf_email"
      },
      body: JSON.stringify(keyValuePairs)
    }
  )
}

async function zipToJson(response) {
  // ... omitted for brevity ...
  // (eg. https://stuk.github.io/jszip)
  return [
    {key: "hello.EN", value: "Hello World"},
    {key: "hello.ES", value: "Hola Mundo"}
  ]
}</code></pre>
            <p>Now, when you want to translate a page, all you have to do is read from Workers KV:</p>
            <pre><code>async function translate(keys, lang) {
  // You bind your translations namespace to the TRANSLATIONS variable.
  return Promise.all(keys.map(key =&gt; TRANSLATIONS.get(key + "." + lang)))
}</code></pre>
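<p>For local experimentation, the same <code>translate</code> function can be exercised against a stubbed <code>TRANSLATIONS</code> binding. The stub below is mine, seeded with the keys from the bulk-write example above; in a Worker, the binding is provided by the runtime:</p>

```javascript
// Stub of the TRANSLATIONS namespace binding for local testing.
const TRANSLATIONS = {
  data: { "hello.EN": "Hello World", "hello.ES": "Hola Mundo" },
  async get(key) {
    return this.data[key] ?? null;
  },
};

// Same function as above: look up each key suffixed with the language.
async function translate(keys, lang) {
  return Promise.all(keys.map((key) => TRANSLATIONS.get(key + "." + lang)));
}
```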
            
    <div>
      <h4>2. Expiring Keys</h4>
      <a href="#2-expiring-keys">
        
      </a>
    </div>
    <p>By default, key-value pairs stored in Workers KV last forever. However, sometimes you want your data to auto-delete after a certain amount of time. That's why we're introducing the <code>expiration</code> and <code>expirationTtl</code> options for write operations.</p>
            <pre><code>// Key expires 60 seconds from now.
NAMESPACE.put("myKey", "myValue", {expirationTtl: 60})

// Key expires if the UNIX epoch is in the past.
NAMESPACE.put("myKey", "myValue", {expiration: 1247788800})</code></pre>
            
            <pre><code># You can also set keys to expire from the Cloudflare API.
curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/values/$KEY?expiration_ttl=$EXPIRATION_IN_SECONDS" \
  -X PUT \
  -H "X-Auth-Key: $CLOUDFLARE_AUTH_KEY" \
  -H "X-Auth-Email: $CLOUDFLARE_AUTH_EMAIL" \
  -d "$VALUE"</code></pre>
    <p>Let's say you want to block users that have been flagged as inappropriate from your website, but only for a week. With an expiring key, you can set the expiration time and not have to worry about deleting it later.</p><p>In this example, we assume users and IP addresses are one and the same. If your application has authentication, you could use access tokens as the key identifier.</p>
            <pre><code>addEventListener("fetch", event =&gt; {
  var url = new URL(event.request.url)
  // An internal API that blocks a new user IP.
  // (eg. example.com/block/1.2.3.4)
  if (url.pathname.startsWith("/block")) {
    var ip = url.pathname.split("/").pop()
    event.respondWith(blockIp(ip))
  } else {
    // Other requests check if the IP is blocked.
   event.respondWith(handleRequest(event.request))
  }
})

async function blockIp(ip) {
  // Values are allowed to be empty in KV,
  // we don't need to store any extra information anyway.
  await BLOCKED.put(ip, "", {expirationTtl: 60*60*24*7})
  return new Response("ok")
}

async function handleRequest(request) {
  var ip = request.headers.get("CF-Connecting-IP")
  if (ip) {
    var blocked = await BLOCKED.get(ip)
    // If we detect an IP and it's blocked, respond with a 403 error.
    if (blocked) {
      return new Response("You are blocked!", {status: 403})
    }
  }
  // Otherwise, passthrough the original request.
  return fetch(request)
}</code></pre>
            
    <div>
      <h4>3. Larger Values</h4>
      <a href="#3-larger-values">
        
      </a>
    </div>
    <p>We've increased our size limit on values from <code>64 kB</code> to <code>2 MB</code>. This is quite useful if you need to store buffer-based or file data in Workers KV.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1cYcxKnB6nfE9lUg6aIUZK/4cefcd37832fc703602f0c8aa37c33d6/workers-kc-file-size-update_2x.png" />
            
            </figure><p>Consider this scenario: you want to let your users upload their favorite GIF to their profile without having to store these GIFs as binaries in your database or manage <b>another</b> cloud storage bucket.</p><p>Workers KV is a great fit for this use case! You can create a Workers KV namespace for your users’ GIFs that is fast and reliable wherever your customers are located.</p><p>In this example, users upload a link to their favorite GIF, then a Worker downloads it and stores it to Workers KV.</p>
            <pre><code>addEventListener("fetch", event =&gt; {
  var url = new URL(event.request.url)
  // User sends their username and a URI encoded link to the GIF
  // they wish to upload.
  // (eg. example.com/api/upload_gif/&lt;username&gt;/&lt;encoded-uri&gt;)
  if (url.pathname.startsWith("/api/upload_gif")) {
    var [username, gifUrl] = url.pathname.split("/").slice(-2)
    event.respondWith(uploadGif(username, gifUrl))
    // Profile contains link to view the GIF.
    // (eg. example.com/api/view_gif/&lt;username&gt;)
  } else if (url.pathname.startsWith("/api/view_gif")) {
    event.respondWith(getGif(url.pathname.split("/").pop()))
  }
})

async function uploadGif(username, gifUrl) {
  // Fetch the GIF from the Internet.
  var gif = await fetch(decodeURIComponent(gifUrl))
  var buffer = await gif.arrayBuffer()
  // Upload the GIF as a buffer to Workers KV, keyed by username.
  await GIFS.put(username, buffer)
  return gif
}

async function getGif(username) {
  var gif = await GIFS.get(username, "arrayBuffer")
  // If the user has set one, respond with the GIF.
  if (gif) {
    return new Response(gif, {headers: {"Content-Type": "image/gif"}})
  } else {
    return new Response("User has no GIF!", {status: 404})
  }
}</code></pre>
            <p>Lastly, we want to thank all of our beta customers. It was your valuable feedback that led us to develop these changes to Workers KV. Make sure to stay in touch with us, we're always looking ahead for what's next and we love hearing from you!</p>
    <div>
      <h3>Pricing</h3>
      <a href="#pricing">
        
      </a>
    </div>
    <p>We’re also ready to announce our GA pricing. If you're one of our Enterprise customers, your pricing obviously remains unchanged.</p><ul><li><p>$0.50 / GB of data stored, 1 GB included</p></li><li><p>$0.50 / million reads, 10 million included</p></li><li><p>$5 / million write, list, and delete operations, 1 million included</p></li></ul><p>During the beta period, we learned customers don't want to just read values at our edge, they want to write values from our edge too. Since there is high demand for these edge operations, which are more costly, we have started charging non-read operations per month.</p>
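<p>To see how the rates combine, here is a rough sketch of a monthly cost estimate applying the numbers above, with the included quotas netted off first. The function and its structure are mine, not an official calculator:</p>

```javascript
// Estimate a monthly Workers KV bill from the GA rates listed above:
// $0.50/GB stored (1 GB included), $0.50/million reads (10M included),
// $5/million writes, lists, and deletes (1M included).
function estimateKvCost({ gbStored, reads, mutations }) {
  const over = (used, included) => Math.max(0, used - included);
  return (
    over(gbStored, 1) * 0.5 +
    (over(reads, 10e6) / 1e6) * 0.5 +
    (over(mutations, 1e6) / 1e6) * 5
  );
}

// e.g. 2 GB stored, 12M reads, 2M writes/lists/deletes in a month
const monthly = estimateKvCost({ gbStored: 2, reads: 12e6, mutations: 2e6 });
```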
    <div>
      <h3>Limits</h3>
      <a href="#limits">
        
      </a>
    </div>
    <p>As mentioned earlier, we increased our value size limit from <code>64 kB</code> to <code>2 MB</code>. We've also removed our cap on the number of keys per namespace — it's now unlimited. Here are our GA limits:</p><ul><li><p>Up to 20 namespaces per account, each with unlimited keys</p></li><li><p>Keys of up to 512 bytes and values of up to 2 MB</p></li><li><p>Unlimited writes per second for different keys</p></li><li><p>One write per second for the same key</p></li><li><p>Unlimited reads per second per key</p></li></ul>
    <div>
      <h3>Try it out now!</h3>
      <a href="#try-it-out-now">
        
      </a>
    </div>
    <p>Now open to all customers, you can start using <a href="https://www.cloudflare.com/products/workers-kv/">Workers KV</a> today from your Cloudflare dashboard under the Workers tab. You can also look at our updated <a href="https://developers.cloudflare.com/workers/kv/">documentation.</a></p><p>We're really excited to see what you all can build with Workers KV!</p> ]]></content:encoded>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">1btTnYVZRP9LCWLguf60eI</guid>
            <dc:creator>Ashcon Partovi</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Serverlist: A big week of Rust with WASM, cloud cost hacking, and more]]></title>
            <link>https://blog.cloudflare.com/serverlist-4th-edition/</link>
            <pubDate>Fri, 26 Apr 2019 19:04:21 GMT</pubDate>
            <description><![CDATA[ Check out our 4th edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend. ]]></description>
            <content:encoded><![CDATA[ <p>Check out our fourth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend.</p><p>Sign up below to have The Serverlist sent directly to your mailbox.</p>


 ]]></content:encoded>
            <category><![CDATA[The Serverlist Newsletter]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">2F2bXmmfCjJbTOl5G50Ryx</guid>
            <dc:creator>Connor Peshek</dc:creator>
            <dc:creator>Andrew Fitch</dc:creator>
        </item>
        <item>
            <title><![CDATA[Rapid Development of Serverless Chatbots with Cloudflare Workers and Workers KV]]></title>
            <link>https://blog.cloudflare.com/rapid-development-of-serverless-chatbots-with-cloudflare-workers-and-workers-kv/</link>
            <pubDate>Thu, 25 Apr 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ As a fast-growing engineering organization, ownership of services changes fairly frequently. Many cycles get burned in chat with questions like "Who owns service x now? ]]></description>
            <content:encoded><![CDATA[ <p>I'm the Product Manager for the Application Services team here at Cloudflare. We recently identified a need for a new tool around service ownership. As a fast-growing engineering organization, ownership of services changes fairly frequently, and many cycles get burned in chat with questions like "Who owns service x now?"</p><p>Whilst it's easy to see how a tool like this saves a few seconds per day for the asker and askee, and saves on some mental context switches, the time saved is unlikely to add up to the cost of development and maintenance.</p>
            <pre><code>= 5 minutes per day
x 260 work days 
= 1300 mins 
/ 60 mins 
= 20 person hours per year</code></pre>
            <p>So, valuing everyone's time equally, a 20-hour investment in that tool would pay for itself within a year. While we've made great strides in improving the efficiency of building tools at Cloudflare, 20 hours is a stretch for an end-to-end build, deploy, and operation of a new tool.</p>
    <div>
      <h3>Enter Cloudflare Workers + Workers KV</h3>
      <a href="#enter-cloudflare-workers-workers-kv">
        
      </a>
    </div>
    <p>The more I use Serverless and Workers, the more I'm struck with the benefits of:</p>
    <div>
      <h4>1. Reduced operational overhead</h4>
      <a href="#1-reduced-operational-overhead">
        
      </a>
    </div>
    <p>When I upload a Worker, it's automatically distributed to 175+ data centers. I don't have to worry about uptime: it will be up, and it will be fast.</p>
    <div>
      <h4>2. Reduced dev time</h4>
      <a href="#2-reduced-dev-time">
        
      </a>
    </div>
    <p>With operational overhead largely removed, I'm able to focus purely on code. A constrained problem space like this lends itself really well to Workers. I reckon we can knock this out in well under 20 hours.</p>
    <div>
      <h3>Requirements</h3>
      <a href="#requirements">
        
      </a>
    </div>
    <p>At Cloudflare, people ask these questions in Chat, so that's a natural interface to service ownership. Here's the spec:</p><table><tr><td><p><b>Use Case</b></p></td><td><p><b>Input</b></p></td><td><p><b>Output</b></p></td></tr><tr><td><p>Add</p></td><td><p>@ownerbot add Jira IT <a href="http://web.archive.org/web/20190624175546/http://chat.google.com/room/ABC123">http://chat.google.com/room/ABC123</a></p></td><td><p>Service added</p></td></tr><tr><td><p>Delete</p></td><td><p>@ownerbot delete Jira</p></td><td><p>Service deleted</p></td></tr><tr><td><p>Question</p></td><td><p>@ownerbot Kibana</p></td><td><p>SRE Core owns Kibana. The room is: <a href="http://web.archive.org/web/20190624175546/http://chat.google.com/ABC123">http://chat.google.com/ABC123</a></p></td></tr><tr><td><p>Export</p></td><td><p>@ownerbot export</p></td><td><p><code>[{name: "Kibana", owner: "SRE Core"...}]</code></p></td></tr></table>
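<p>The spec above is simple enough to dispatch with a small command parser. Here is a sketch; the function and field names are my own, not ownerbot's final implementation:</p>

```javascript
// Parse the text of an @ownerbot mention into a command object.
// Grammar, per the spec table:
//   add <name> <owner> <room-url> | delete <name> | export | <name>
function parseCommand(text) {
  const words = text.trim().split(/\s+/);
  switch (words[0]) {
    case "add":
      return { type: "add", name: words[1], owner: words[2], room: words[3] };
    case "delete":
      return { type: "delete", name: words[1] };
    case "export":
      return { type: "export" };
    default:
      // Anything else is treated as an ownership question.
      return { type: "question", name: words[0] };
  }
}
```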
    <div>
      <h3>Hello @ownerbot</h3>
      <a href="#hello-ownerbot">
        
      </a>
    </div>
    <p>Following the <a href="https://developers.google.com/hangouts/chat/how-tos/bots-develop">Hangouts Chat API Guide</a>, let's start with a hello world bot.</p><ol><li><p>To configure the bot, go to the <a href="https://developers.google.com/hangouts/chat/how-tos/bots-publish">Publish</a> page and scroll down to the <b>Enable The API</b> button:</p></li><li><p>Enter the bot name</p></li><li><p>Download the private key JSON file</p></li><li><p>Go to the <a href="https://console.developers.google.com/">API Console</a></p></li><li><p>Search for the <b>Hangouts Chat API</b> (<i>Note: not the Google+ Hangouts API</i>)
</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7mQUzoV7nOtkEjVc5IOxnf/5981969de5b73cec65d8146eeec3b383/api-console-hangouts-chat-api-1.png" />
            
            </figure></li><li><p>Click <b>Configuration</b> on the left menu</p></li><li><p>Fill out the form as per below <a href="#fn1">[1]</a></p><ul><li><p>Use a hard-to-guess URL. I <a href="https://www.guidgenerator.com/online-guid-generator.aspx">generate a GUID</a> and use that in the URL.</p></li><li><p>The URL will be the route you associate with your Worker in the Dashboard
</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6LfF9FngUCK4THzSlrnFho/ecfee7e37965cdfb841e4f1a304959cd/bot-configuration-1.png" />
            
            </figure></li></ul></li><li><p>Click Save</p></li></ol><p>So Google Chat should know about our bot now. Back in Google Chat, click in the "Find people, rooms, bots" text box and choose "Message a Bot". Your bot should show up in the search:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6En5Ilq95DEcC1hQEfpCO/3c9e5a928f68f279d96e16bc62934b3e/message-a-bot.png" />
            
            </figure><p>It won't be too useful just yet, as we need to create our Worker to receive the messages and respond!</p>
    <div>
      <h3>The Worker</h3>
      <a href="#the-worker">
        
      </a>
    </div>
    <p>In the Workers dashboard, create a script and associate it with the route you defined in step #7 (the one with the GUID). It should look something like below. <a href="#fn2">[2]</a></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2pjvx9aep146V4aVSCBjvV/41307cdb04256dea6036ff1c6fab1902/route.png" />
            
            </figure><p>The Google Chatbot interface is pretty simple, but it's weirdly obfuscated in the Hangouts API guide IMHO; you have to reverse engineer the Python example.</p><p>Basically, if we message our bot like <code>@ownerbot-blog Kibana</code>, we'll get a message like this:</p>
            <pre><code>  {
    "type": "MESSAGE",
    "message": {
      "argumentText": "Kibana"
    }
  }</code></pre>
            <p>To respond, we return <code>200 OK</code> with a JSON body like this:</p>
            <pre><code>content-length: 27
content-type: application/json

{"text":"Hello chat world"}</code></pre>
            <p>So, the minimum Chatbot Worker looks something like this:</p>
            <pre><code>addEventListener('fetch', event =&gt; { event.respondWith(process(event.request)) });

function process(request) {
  let body = {
    text: "Hello chat world"
  }
  return new Response(JSON.stringify(body), {
    status: 200,
    headers: {
        "Content-Type": "application/json",
        "Cache-Control": "no-cache"
    }
  });
}</code></pre>
            <p>Save and deploy that, and we should be able to message our bot:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4BL2D9sXew5rPpVLtif5bS/d75d9c07bbb51ff94de92115e34a8d71/google-chatbot-hello-world-response.png" />
            
            </figure><p><b>Success</b>!</p>
    <div>
      <h3>Implementation</h3>
      <a href="#implementation">
        
      </a>
    </div>
    <p>OK, on to the meat of the code. Based on the requirements, I see a need for an <code>AddCommand</code>, <code>QueryCommand</code>, <code>DeleteCommand</code> and <code>HelpCommand</code>. I also see some sort of <code>ServiceDirectory</code> that knows how to add, delete and retrieve services.</p><p>I created a CommandFactory which accepts a ServiceDirectory, as well as an implementation of a KV store, which will be Workers KV in production, but which I'll mock out in tests.</p>
            <pre><code>class CommandFactory {
    constructor(serviceDirectory, kv) {
        this.serviceDirectory = serviceDirectory;
        this.kv = kv;
    }

    create(argumentText) {
        let parts = argumentText.split(' ');
        let primary = parts[0];       
        
        switch (primary) {
            case "add":
                return new AddCommand(argumentText, this.serviceDirectory, this.kv);
            case "delete":
                return new DeleteCommand(argumentText, this.serviceDirectory, this.kv);
            case "help":
                return new HelpCommand(argumentText, this.serviceDirectory, this.kv);
            default:
                return new QueryCommand(argumentText, this.serviceDirectory, this.kv);
        }
    }
}</code></pre>
            <p>OK, so if we receive a message like <code>@ownerbot add</code>, we'll interpret it as an <code>AddCommand</code>; anything we don't recognize, we'll treat as a <code>QueryCommand</code> like <code>@ownerbot Kibana</code>, which keeps parsing simple.</p><p>Our commands need a service directory, which will look something like this:</p>
            <pre><code>class ServiceDirectory {     
    get(serviceName) {...}
    async add(service) {...}
    async delete(serviceName) {...}
    find(serviceName) {...}
    getNames() {...}
}</code></pre>
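            <p>The method bodies are elided above. As a hedged sketch, here's what a Workers KV-backed implementation might look like; storing the whole directory under a single <code>services</code> key as a JSON blob is an assumption for illustration, not the actual ownerbot source:</p>

```javascript
// Hypothetical ServiceDirectory backed by a key-value store.
// Persisting everything under one "services" key as a JSON blob
// is an assumption; the real ownerbot may store services differently.
class ServiceDirectory {
    constructor(kv) {
        this.kv = kv;        // Workers KV in production, a mock in tests
        this.services = {};  // in-memory cache, keyed by lowercased name
    }

    // Load the persisted directory before handling commands
    async init() {
        let stored = await this.kv.get("services");
        this.services = stored ? JSON.parse(stored) : {};
    }

    get(serviceName) {
        return this.services[serviceName.toLowerCase()];
    }

    async add(service) {
        this.services[service.name.toLowerCase()] = service;
        await this.kv.put("services", JSON.stringify(this.services));
    }

    async delete(serviceName) {
        delete this.services[serviceName.toLowerCase()];
        await this.kv.put("services", JSON.stringify(this.services));
    }

    // Naive alias lookup, e.g. resolving "@ownerbot logs" to Kibana
    find(serviceName) {
        let needle = serviceName.toLowerCase();
        return Object.values(this.services).find(s =>
            (s.aliases || []).some(a => a.toLowerCase() === needle));
    }

    getNames() {
        return Object.values(this.services).map(s => s.name);
    }
}
```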
            <p>Let's build some commands. Oh, and my chatbot is going to be Ultima IV themed, because... reasons.</p>
            <pre><code>class AddCommand extends Command {

    async respond() {
        let cmdParts = this.commandParts;
        if (cmdParts.length !== 6) {
            return new OwnerbotResponse("Adding a service requireth Name, Owner, Room Name, Google Chat Room Url and Aliases.", false);
        }
        let name = this.commandParts[1];
        let owner = this.commandParts[2];
        let room = this.commandParts[3];
        let url = this.commandParts[4];
        let aliasesPart = this.commandParts[5];
        let aliases = aliasesPart.split(' ');
        let service = {
            name: name,
            owner: owner,
            room: room,
            url: url,
            aliases: aliases
        }
        await this.serviceDirectory.add(service);
        return new OwnerbotResponse(`My codex of knowledge has expanded to contain knowledge of ${name}. Congratulations virtuous Paladin.`);
    }
}</code></pre>
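            <p><code>AddCommand</code> extends a base <code>Command</code> class that isn't shown above. A minimal sketch might just hold the parsed input and shared dependencies; the quote-aware tokenizer here is an assumption, chosen to match the quoted arguments that appear in the tests:</p>

```javascript
// Hypothetical base class for the commands. The quote-aware
// tokenizer is an assumption; the real parser may differ.
class Command {
    constructor(argumentText, serviceDirectory, kv) {
        this.argumentText = argumentText;
        this.serviceDirectory = serviceDirectory;
        this.kv = kv;
        // Split on whitespace, but keep 'single quoted' phrases as one part
        this.commandParts = (argumentText.match(/'[^']*'|\S+/g) || [])
            .map(part => part.replace(/^'|'$/g, ""));
    }
}
```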
            <p>The nice thing about the <a href="https://en.wikipedia.org/wiki/Command_pattern">Command</a> pattern for chatbots is that you can encapsulate the logic of each command for testing, as well as compose a series of commands together to test out conversations. Later, we could extend it to support undo. Let's test the <code>AddCommand</code>:</p>
            <pre><code>  it('requires all args', async function() {
            let addCmd = new AddCommand("add AdminPanel 'Internal Tools' 'Internal Tools'", dir, kv); //missing url            
            let res = await addCmd.respond();
            console.log(res.text);
            assert.equal(res.success, false, "Adding with missing args should fail");            
        });

        it('returns success for all args', async function() {
            let addCmd = new AddCommand("add AdminPanel 'Internal Tools' 'Internal Tools Room' 'http://chat.google.com/roomXYZ'", dir, kv);            
            let res = await addCmd.respond();
            console.debug(res.text);
            assert.equal(res.success, true, "Should have succeeded with all args");            
        });</code></pre>
            
            <pre><code>$ mocha -g "AddCommand"
  AddCommand
    add
      ✓ requires all args
      ✓ returns success for all args
  2 passing (19ms)</code></pre>
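            <p>The <code>kv</code> fixture in those tests is a <code>MockKeyValueStore</code> that isn't shown either. A Map-backed stand-in mirroring the Workers KV <code>get</code>/<code>put</code>/<code>delete</code> surface (an assumption about which methods the bot actually touches) is enough:</p>

```javascript
// Hypothetical test double for Workers KV. Only get/put/delete
// are mirrored; assumed to be all the methods ownerbot uses.
class MockKeyValueStore {
    constructor() {
        this.store = new Map();
    }
    async get(key) {
        // Workers KV returns null for missing keys
        return this.store.has(key) ? this.store.get(key) : null;
    }
    async put(key, value) {
        this.store.set(key, value);
    }
    async delete(key) {
        this.store.delete(key);
    }
}
```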
            <p>So far so good. But adding commands to our ownerbot isn't going to be so useful unless we can query them.</p>
            <pre><code>class QueryCommand extends Command {
    async respond() {
        let service = this.serviceDirectory.get(this.argumentText);
        if (service) {
            return new OwnerbotResponse(`${service.owner} owns ${service.name}. Seeketh thee room ${service.room} - ${service.url})`);
        }
        let serviceNames = this.serviceDirectory.getNames().join(", ");
        return new OwnerbotResponse(`I knoweth not of that service. Thou mightst asketh me of: ${serviceNames}`);
    }
}</code></pre>
            <p>Let's write a test that runs an <code>AddCommand</code> followed by a <code>QueryCommand</code>:</p>
            <pre><code>describe ('QueryCommand', function() {
    let kv = new MockKeyValueStore();
    let dir = new ServiceDirectory(kv);
    before(async function() {
        await dir.init();
    });

    it('Returns added services', async function() {    
        let addCmd = new AddCommand("add AdminPanel 'Internal Tools' 'Internal Tools Room' url 'alias' abc123", dir, kv);            
        await addCmd.respond();

        let queryCmd = new QueryCommand("AdminPanel", dir, kv);
        let res = await queryCmd.respond();
        assert.equal(res.success, true, "Should have succeeded");
        assert(res.text.indexOf('Internal Tools') &gt; -1, "Should have returned the team name in the query response");
    })
})</code></pre>
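            <p>Tying it together: the hello-world listener from earlier stays the same, but <code>process</code> grows to parse the Chat event and dispatch through the <code>CommandFactory</code>. This is a hedged sketch; the <code>OWNERBOT_KV</code> binding name and the per-request <code>ServiceDirectory</code> construction are assumptions, not the actual ownerbot wiring:</p>

```javascript
// Hypothetical request handler wiring the pieces above together.
// OWNERBOT_KV is an assumed Workers KV namespace binding name.
async function process(request) {
  // Google Chat POSTs a JSON event; argumentText holds the words after @ownerbot
  let chatEvent = await request.json();
  let argumentText = ((chatEvent.message && chatEvent.message.argumentText) || "").trim();

  let serviceDirectory = new ServiceDirectory(OWNERBOT_KV);
  await serviceDirectory.init();

  let command = new CommandFactory(serviceDirectory, OWNERBOT_KV).create(argumentText);
  let ownerbotResponse = await command.respond();

  // Chat expects 200 OK with a {"text": ...} JSON body
  return new Response(JSON.stringify({ text: ownerbotResponse.text }), {
    status: 200,
    headers: { "Content-Type": "application/json" }
  });
}
```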
            
    <div>
      <h3>Demo</h3>
      <a href="#demo">
        
      </a>
    </div>
    <p>A lot of the code has been elided for brevity, but you can view the <a href="https://github.com/stevenpack/ownerbot">full source on GitHub</a>. Let's take it for a spin!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/UJpvFMBP0gI5gx6ggEadY/303538f35b351adb396c9f3a0da38c94/ownerbot1-1.gif" />
            
            </figure>
    <div>
      <h3>Learnings</h3>
      <a href="#learnings">
        
      </a>
    </div>
    <p>Some of the things I learned during the development of @ownerbot were:</p><ul><li><p>Chatbots are an awesome use case for Serverless. You can deploy and never worry about the infrastructure again</p></li><li><p>Workers KV extends the range of useful chatbots to include stateful bots like @ownerbot</p></li><li><p>The <code>Command</code> pattern provides a useful way to encapsulate the parsing of, and responding to, commands in a chatbot.</p></li></ul><p>In <b>Part 2</b> we'll add authentication to ensure we're only responding to requests from our instance of Google Chat.</p><ol><li><p>For simplicity, I'm going to use a static shared key, but Google has recently rolled out a more <a href="https://developers.google.com/hangouts/chat/how-tos/bots-develop?hl=en_US#verifying_bot_authenticity">secure method</a> for verifying the caller's authenticity, which we'll expand on in Part 2. <a href="#fnref1">↩︎</a></p></li><li><p>This UI is the multiscript version available to Enterprise customers. You can still implement the bot with a single Worker; you'll just need to recognize and route requests to your chatbot code. <a href="#fnref2">↩︎</a></p></li></ol><p></p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6pbqrsfFAJTY87DgBJAxT9</guid>
            <dc:creator>Steven Pack</dc:creator>
        </item>
    </channel>
</rss>