A static CT API log running on a traditional server using the Sunlight implementation.
Next, let’s look at how we can translate these components into ones suitable for deployment on Workers.
Making it work
Let’s start with the easy choices. The static CT monitoring APIs are designed to serve static, cacheable, compressible assets from object storage. The API should be highly available and have the capacity to serve any number of CT clients. The natural choice is Cloudflare R2, which provides globally consistent storage for large data volumes, configurable caching and compression, and unbounded read operations.
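As a sketch of how the monitoring APIs can be fronted, the handler below serves immutable log tiles out of a generic object store with aggressive cache headers. The path layout, names, and interface here are illustrative assumptions, not Azul's actual code; in production the store would be an R2 bucket binding on a Worker.

```typescript
// Minimal object-store interface; in a Worker this would be an R2 binding.
type Bucket = { get(key: string): Promise<string | null> };

type TileResponse = { status: number; headers: Record<string, string>; body?: string };

// Serve a static CT asset. Tiles are write-once, so on a hit we can mark the
// response as immutable and let caches hold it indefinitely.
async function serveTile(bucket: Bucket, path: string): Promise<TileResponse> {
  const obj = await bucket.get(path.replace(/^\//, ""));
  if (obj === null) {
    return { status: 404, headers: {} };
  }
  return {
    status: 200,
    headers: { "cache-control": "public, max-age=31536000, immutable" },
    body: obj,
  };
}
```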
A static CT API log running on Workers using a preliminary version of the Azul implementation, which ran into performance limitations.
The static CT submission APIs are where the real challenge lies. In particular, they allow CT clients to submit certificate chains to be incorporated into the append-only log. We used Workers as the frontend for the CT log application. Workers run in data centers close to the client, scaling on demand to handle request load, making them the ideal place to run the majority of the heavyweight request handling logic, including validating requests, checking the deduplication cache (discussed below), and submitting the entry to be sequenced.
The next question was where and how we’d run the backend to handle the CT log sequencing logic, which needs to be stateful and tightly coordinated. We chose Durable Objects (DOs), a special type of stateful Cloudflare Worker where each instance has persistent storage and a unique name which can be used to route requests to it from anywhere in the world. DOs are designed to scale effortlessly for applications that can be easily broken up into self-contained units that do not need a lot of coordination across units. For example, a chat application can use one DO to control each chat room. In our model, then, each CT log is controlled by a single DO. This architecture allows us to easily run multiple CT logs within a single Workers application, but as we’ll see, the limitations of individual single-threaded DOs can easily become a bottleneck. More on this later.
With the CT log backend as a Durable Object, several other components fell into place: Durable Objects’ strongly-consistent transactional storage neatly fit the requirements for the “lock backend” to persist the log’s latest checkpoint, and we can use an alarm to trigger the log sequencing every second. We can also use location hints to place CT logs in locations geographically close to clients for reduced latency, similar to Google’s Argon and Xenon logs.
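The sequencing alarm can be kept on a steady one-second cadence by always scheduling the next alarm on an interval boundary rather than a fixed delay from "now". A minimal sketch, assuming millisecond timestamps as used by Durable Object alarms; the function name is ours, not Azul's, and in a Durable Object the result would be passed to ctx.storage.setAlarm() from the alarm() handler.

```typescript
// Compute the next sequencing alarm, aligned to the interval boundary so that
// tree heads are produced at a consistent tempo even if a run finishes late.
function nextSequencingAlarm(nowMs: number, intervalMs = 1000): number {
  return Math.floor(nowMs / intervalMs) * intervalMs + intervalMs;
}
```

Aligning to the boundary (instead of `nowMs + intervalMs`) prevents drift from accumulating when an alarm handler runs long.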
The choice of datastore for the deduplication cache proved to be non-obvious. The cache is best-effort, and intended to avoid re-sequencing entries that are already present in the log. The cache key is computed by hashing certain fields of the add-[pre-]chain request, and the cache value consists of the entry’s index in the log and the timestamp at which it was sequenced. At current log submission rates, the deduplication cache could grow in excess of 50 GB for 6 months of log data. In the Sunlight implementation, the deduplication cache is a local SQLite database, and checks against it are tightly coupled with sequencing, which ensures that duplicates from in-flight requests are correctly accounted for. However, this approach did not translate well to Cloudflare's architecture: the data doesn’t comfortably fit within Durable Object Storage or single-database D1 limits, and it was too slow to read and write remote storage directly from within the sequencing loop. Ultimately, we split the deduplication cache into two components: a local fixed-size in-memory cache for fast deduplication over short periods of time (on the order of minutes), and a long-term deduplication cache built on Cloudflare Workers KV, a global, low-latency, eventually-consistent key-value store without storage limitations.
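The two-tier lookup can be sketched as below. The class name, the cached-value shape, and the store interface are illustrative assumptions rather than Azul's actual API; in production the async store would be a Workers KV binding, and either tier may miss (the cache is best-effort by design).

```typescript
type DedupValue = { index: number; timestamp: number };
// Long-term store interface; Workers KV in production, anything async here.
type KvStore = { get(key: string): Promise<DedupValue | null> };

class DedupCache {
  // Fixed-size in-memory tier for entries sequenced in the last few minutes.
  private recent = new Map<string, DedupValue>();
  constructor(private kv: KvStore, private maxRecent = 10_000) {}

  async lookup(cacheKey: string): Promise<DedupValue | null> {
    const hit = this.recent.get(cacheKey);
    if (hit) return hit; // fast path: no network round trip
    // Slow path: eventually-consistent long-term cache; recent writes may
    // not be visible yet, which is acceptable for a best-effort cache.
    return this.kv.get(cacheKey);
  }

  remember(cacheKey: string, value: DedupValue): void {
    if (this.recent.size >= this.maxRecent) {
      // Evict the oldest insertion to keep the in-memory tier fixed-size.
      const oldest = this.recent.keys().next().value;
      if (oldest !== undefined) this.recent.delete(oldest);
    }
    this.recent.set(cacheKey, value);
  }
}
```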
With this architecture, it was relatively straightforward to port the Go code to Rust and bring a functional static CT log up on Workers. We’re done then, right? Not quite. Performance tests showed that the log was only capable of sequencing 20-30 new entries per second, well under the 70 per second target of existing logs. We could work around this by simply running more logs, but that puts strain on other parts of the CT ecosystem — namely on TLS clients and monitors, which need to keep state for each log. Additionally, the alarm used to trigger sequencing would often be delayed by multiple seconds, meaning that the log was failing to produce new tree heads at consistent intervals. Time to go back to the drawing board.
Making it fast
In the design thus far, we’re asking a single-threaded Durable Object instance to do a lot of multi-tasking. The DO processes incoming requests from the Frontend Worker to add entries to the sequencing pool, and must periodically sequence the pool and write state to the various storage backends. A log handling 100 requests per second needs to switch between 101 running tasks (the extra one for the sequencing), plus any async tasks like writing to remote storage — usually 10+ writes to object storage and one write to the long-term deduplication cache per sequenced entry. No wonder the sequencing task was getting delayed!
A static CT API log running on Workers using the Azul implementation with batching to improve performance.
We were able to work around these issues by adding an additional layer of DOs between the Frontend Worker and the Sequencer, which we call Batchers. The Frontend Worker uses consistent hashing on the cache key to determine which of several Batchers to submit the entry to, and the Batcher helps to reduce the number of requests to the Sequencer by buffering requests and sending them together in batches. When the batch is sequenced, the Batcher distributes the responses back to the Frontend Workers that submitted the request. The Batcher also handles writing updates to the deduplication cache, further freeing up resources for the Sequencer.
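The routing step can be sketched as a stable hash of the deduplication cache key modulo the number of Batchers, which guarantees that duplicate submissions always land on the same Batcher. This is a simplification (a fixed Batcher count rather than a full consistent-hashing ring), and FNV-1a is chosen purely for illustration; the real implementation may differ.

```typescript
// 32-bit FNV-1a hash of a string (illustrative choice of hash function).
function fnv1a32(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

// Map a cache key to a Batcher index. Because the hash is deterministic,
// retries and duplicates of the same entry reach the same Batcher.
function pickBatcher(cacheKey: string, numBatchers: number): number {
  return fnv1a32(cacheKey) % numBatchers;
}
```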
By limiting the scope of the critical block of code that needed to be run synchronously in a single DO, and leaning on the strengths of DOs by scaling horizontally where the workload allows it, we were able to drastically improve application performance. With this new architecture, the CT log application can handle upwards of 500 requests per second to the submission APIs to add new log entries, while maintaining a consistent sequencing tempo to keep per-request latency low (typically 1-2 seconds).
One of the reasons I was excited to work on this project is that it gave me an opportunity to implement a Workers application in Rust, which I’d never done from scratch before. Not everything was smooth, but overall I would recommend the experience.
To be clear, these rough edges are expected! The Workers platform is continuously gaining new features, and it’s natural that the Rust bindings would fall behind. As more developers rely on (and contribute to, hint hint) the Rust bindings, the developer experience will continue to improve.
The WebPKI is constantly evolving and growing, and upcoming changes, in particular shorter certificate lifetimes and larger post-quantum certificates, are going to place significantly more load on the CT ecosystem.
The CA/Browser Forum defines a set of Baseline Requirements for publicly-trusted TLS server certificates. As of 2020, the maximum certificate lifetime for publicly-trusted certificates is 398 days. However, there is a ballot measure to reduce that period to as low as 47 days by March 2029. Let’s Encrypt is going even further, and at the end of 2024 announced that they will be offering short-lived certificates with a lifetime of only six days by the end of 2025. Based on some back-of-the-envelope calculations using statistics from Merkle Town, these changes could increase the number of logged entries in the CT ecosystem by 16-20x.
If you’ve been keeping up with this blog, you’ll also know that post-quantum certificates are on the horizon, bringing with them larger signature and public key sizes. Today, a certificate with a P-256 ECDSA public key and issuer signature can be less than 1 kB. Dropping in an ML-DSA-44 public key and signature brings the same certificate size to 4.6 kB, assuming the SCTs use 96-byte UOVls-pkc signatures. With these choices, post-quantum certificates could require CT logs to store 4x the amount of data per log entry.
The static CT API design helps to ensure that CT logs are much better equipped to handle this increased load, especially if the load is distributed across multiple logs per operator. Our new implementation makes it easy for log operators to run CT logs on top of Cloudflare’s infrastructure, adding more operational diversity and robustness to the CT ecosystem. We welcome feedback on the design and implementation as GitHub issues, and encourage CAs and other interested parties to start submitting to and consuming from our test logs.
(By Luke Valenta. Published 2025-04-11.)

Skip the setup: deploy a Workers application in seconds
You can now add a Deploy to Cloudflare button to the README of your Git repository containing a Workers application — making it simple for other developers to quickly set up and deploy your project!
The Deploy to Cloudflare button:
Creates a new Git repository on your GitHub or GitLab account: Cloudflare will automatically clone the project and create a new repository on your account, so you can continue developing.
Automatically provisions resources the app needs: If your repository requires Cloudflare primitives like a Workers KV namespace, a D1 database, or an R2 bucket, Cloudflare will automatically provision them on your account and bind them to your Worker upon deployment.
Configures Workers Builds (CI/CD): Every new push to your production branch on your newly created repository will automatically build and deploy courtesy of Workers Builds.
There is nothing more frustrating than struggling to kick the tires on a new project because you don’t know where to start. Over the past couple of months, we’ve launched some improvements to getting started on Workers, including a gallery of Git-connected templates that help you kickstart your development journey.
But we think there’s another part of the story. Every day, we see new Workers applications being built and open-sourced by developers in the community, ranging from starter projects to mission-critical applications. These projects are designed to be shared, deployed, customized, and contributed to. But first and foremost, they must be simple to deploy.
If you’ve open-sourced a new Workers application before, you may have listed steps like the following in your README to get others up and running with your repository:
“Clone this repo”
“Install these packages”
“Install Wrangler”
“Create this database”
“Paste the database ID back into your config file”
“Run this command to deploy”
“Push to a new Git repo”
“Set up CI”
The list only grows as your application gets more complicated, deterring other developers and making your project feel intimidating to deploy. Now, your project can be up and running in one shot — which means more traction, more feedback, and more contributions.
We’re not just talking about building and sharing small starter apps but also complex pieces of software. If you’ve ever self-hosted your own instance of an application on a traditional cloud provider before, you’re likely familiar with the pain of tedious setup, operational overhead, or hidden costs of your infrastructure.
| Self-hosting with a traditional cloud provider | Self-hosting with Cloudflare |
| --- | --- |
| Set up a VPC | ✅ Serverless |
| Install tools and dependencies | ✅ Highly-available global network |
| Set up and provision storage | ✅ Automatic provisioning of datastores like D1 databases and R2 buckets |
| Manually configure CI/CD pipeline to automate deployments | ✅ Built-in CI/CD workflow configured out of the box |
| Scramble to manually secure your environment if a runtime vulnerability is discovered | ✅ Automatic runtime updates to keep your environment secure |
| Configure autoscaling policies and manage idle servers | ✅ Scale automatically and only pay for what you use |
By making your open-source repository accessible with a Deploy to Cloudflare button, you can allow other developers to deploy their own instance of your app without requiring deep infrastructure expertise.
We’re inviting all Workers developers looking to open-source their project to add Deploy to Cloudflare buttons to their projects and help others get up and running faster. We’ve already started working with open-source app developers! Here are a few great examples to explore:
Fiberplane helps developers build, test and explore Hono APIs and AI Agents in an embeddable playground. This Developer Week, Fiberplane released a set of sample Worker applications built on the ‘HONC’ stack — Hono, Drizzle ORM, D1 Database, and Cloudflare Workers — that you can use as the foundation for your own projects. With an easy one-click Deploy to Cloudflare, each application comes preconfigured with the open source Fiberplane API Playground, making it easy to generate OpenAPI docs, test your handlers, and explore your API, all within one embedded interface.
You can now build and deploy remote Model Context Protocol (MCP) servers on Cloudflare Workers! MCP servers provide a standardized way for AI agents to interact with services directly, enabling them to complete actions on users' behalf. Cloudflare's remote MCP server implementation supports authentication, allowing users to login to their service from the agent to give it scoped permissions. This gives users the ability to interact with services without navigating dashboards or learning APIs — they simply tell their AI agent what they want to accomplish.
AI agents are intelligent systems capable of autonomously executing tasks by making real-time decisions about which tools to use and how to structure their workflows. Unlike traditional automation (which follows rigid, predefined steps), agents dynamically adapt their strategies based on context and evolving inputs. This template serves as a starting point for building AI-driven chat agents on Cloudflare's Agent platform. Powered by Cloudflare’s Agents SDK, it provides a solid foundation for creating interactive AI chat experiences with a modern UI and tool integrations capabilities.
Be sure to make your Git repository public and add the following snippet, substituting your Git repository URL.
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=<YOUR_GIT_REPO_URL>)
When another developer clicks your Deploy to Cloudflare button, Cloudflare will parse the Wrangler configuration file, provision any resources detected, and create a new repo on their account that’s updated with information about newly created resources. For example:
{
  "compatibility_date": "2024-04-03",

  "d1_databases": [
    {
      "binding": "MY_D1_DATABASE",
      // will be updated with newly created database ID
      "database_id": "1234567890abcdef1234567890abcdef"
    }
  ]
}
Check out our documentation for more information on how to set up a deploy button for your application and best practices to ensure a successful deployment for other developers.
For new Cloudflare developers, keep an eye out for “Deploy to Cloudflare” buttons across the web, or simply paste the URL of any public GitHub or GitLab repository containing a Workers application into the Cloudflare dashboard to get started.
During Developer Week, tune in to our blog as we unveil new features and announcements — many including Deploy to Cloudflare buttons — so you can jump right in and start building!
(By Nevi Shah. Published 2025-04-08.)

Open-sourcing OpenPubkey SSH (OPKSSH): integrating single sign-on with SSH
OPKSSH makes it easy to SSH with single sign-on technologies like OpenID Connect, thereby removing the need to manually manage and configure SSH keys. It does this without adding a trusted party other than your identity provider (IdP).
A cornerstone of modern access control is single sign-on (SSO), where a user authenticates to an identity provider (IdP), and in response the IdP issues the user a token. The user can present this token to prove their identity, such as “Google says I am Alice”. SSO is the rare security technology that both increases convenience — users only need to sign in once to get access to many different systems — and increases security.
OpenID Connect (OIDC) is the main protocol used for SSO. As shown below, in OIDC the IdP, called an OpenID Provider (OP), issues the user an ID Token which contains identity claims about the user, such as “email is alice@example.com”. These claims are digitally signed by the OP, so anyone who receives the ID Token can check that it really was issued by the OP.
Unfortunately, while ID Tokens do include identity claims like name, organization, and email address, they do not include the user’s public key. This prevents them from being used to directly secure protocols like SSH or End-to-End Encrypted messaging.
Note that throughout this post we use the term OpenID Provider (OP) rather than IdP, as OP specifies the exact type of IdP we are using, i.e., an OpenID IdP. We use Google as an example OP, but OpenID Connect works with Google, Azure, Okta, etc.
Shows a user Alice signing in to Google using OpenID Connect and receiving an ID Token
OpenPubkey, shown below, adds public keys to ID Tokens. This enables ID Tokens to be used like certificates, e.g. “Google says alice@example.com is using public key 0x123.” We call an ID token that contains a public key a PK Token. The beauty of OpenPubkey is that, unlike other approaches, OpenPubkey does not require any changes to existing SSO protocols and supports any OpenID Connect compliant OP.
Shows a user Alice signing in to Google using OpenID Connect/OpenPubkey and then producing a PK Token

While OpenPubkey enables ID Tokens to be used as certificates, OPKSSH extends this functionality so that these ID Tokens can be used as SSH keys in the SSH protocol. This adds SSO authentication to SSH without requiring changes to the SSH protocol.
OPKSSH frees users and administrators from the need to manage long-lived SSH keys, making SSH more secure and more convenient.
“In many organizations – even very security-conscious organizations – there are many times more obsolete authorized keys than they have employees. Worse, authorized keys generally grant command-line shell access, which in itself is often considered privileged. We have found that in many organizations about 10% of the authorized keys grant root or administrator access. SSH keys never expire.”
– Challenges in Managing SSH Keys – and a Call for Solutions by Tatu Ylonen (Inventor of SSH)
In SSH, users generate a long-lived SSH public key and SSH private key. To enable a user to access a server, the user or the administrator of that server configures that server to trust that user’s public key. Users must protect the file containing their SSH private key. If the user loses this file, they are locked out. If they copy their SSH private key to multiple computers or back up the key, they increase the risk that the key will be compromised. When a private key is compromised or a user no longer needs access, the user or administrator must remove that public key from any servers it currently trusts. All of these problems create headaches for users and administrators.
OPKSSH overcomes these issues:
Improved security: OPKSSH replaces long-lived SSH keys with ephemeral SSH keys that are created on-demand by OPKSSH and expire when they are no longer needed. This reduces the risk that a private key is compromised, and limits the time period during which an attacker can use a compromised private key. By default, OPKSSH public keys expire after 24 hours, but the expiration policy can be set in a configuration file.
Improved usability: Creating an SSH key is as easy as signing in to an OP. This means that a user can SSH from any computer with opkssh installed, even if they haven’t copied their SSH private key to that computer.
To generate their SSH key, the user simply runs opkssh login, and then they can use ssh as they typically do.
Improved visibility: OPKSSH moves SSH from authorization by public key to authorization by identity. If Alice wants to give Bob access to a server, she doesn’t need to ask for his public key; she can just add Bob’s email address bob@example.com to the OPKSSH authorized users file, and he can sign in. This makes tracking who has access much easier, since administrators can see the email addresses of the authorized users.
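For example, granting Bob access might look like adding one line to the server's authorized users file. The file path and column layout below are illustrative (a principal, an email, and the issuing OP); consult the OPKSSH documentation for the authoritative location and syntax.

```
# /etc/opk/auth_id (illustrative)
# <principal> <email> <issuer>
root bob@example.com https://accounts.google.com
```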
OPKSSH does not require any code changes to the SSH server or client. The only change needed to SSH on the SSH server is to add two lines to the SSH config file. For convenience, we provide an installation script that does this automatically, as seen in the video below.
Shows a user Alice SSHing into a server with her PK Token inside her SSH public key. The server then verifies her SSH public key using the OpenPubkey verifier.
Let’s look at an example of Alice (alice@example.com) using OPKSSH to SSH into a server:
Alice runs opkssh login. This command automatically generates an ephemeral public key and private key for Alice. Then it runs the OpenPubkey protocol by opening a browser window and having Alice log in through her SSO provider, e.g., Google.
If Alice SSOs successfully, OPKSSH will now have a PK Token that commits to Alice’s ephemeral public key and Alice’s identity. Essentially, this PK Token says “alice@example.com authenticated her identity and her public key is 0x123…”.
OPKSSH then saves to Alice’s .ssh directory:
an SSH public key file that contains Alice’s PK Token
and an SSH private key set to Alice’s ephemeral private key.
When Alice attempts to SSH into a server, the SSH client will find the SSH public key file containing the PK Token in Alice’s .ssh directory, and it will send it to the SSH server to authenticate.
The SSH server forwards the received SSH public key to the OpenPubkey verifier installed on the SSH server. This is because the SSH server has been configured to use the OpenPubkey verifier via the AuthorizedKeysCommand.
The OpenPubkey verifier receives the SSH public key file and extracts the PK Token from it. It then verifies that the PK Token is unexpired, valid, signed by the OP and that the public key in the PK Token matches the public key field in the SSH public key file. Finally, it extracts the email address from the PK Token and checks if alice@example.com is allowed to SSH into this server.
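The verifier's decision in this step can be sketched as a single predicate. The field names and the pre-parsed token shape below are simplified assumptions for illustration; the real verifier parses the PK Token out of the SSH certificate and checks the OP's signature cryptographically, which is elided here.

```typescript
// Simplified stand-in for a parsed PK Token.
type PkToken = {
  email: string;        // identity claim from the OP
  publicKey: string;    // user's ephemeral public key committed in the token
  expiry: number;       // expiry time, seconds since epoch
  signatureValid: boolean; // result of verifying the OP's signature
};

function authorize(
  token: PkToken,
  sshCertPublicKey: string,   // public key field of the received SSH public key
  allowedUsers: Set<string>,  // emails from the authorized users file
  nowSeconds: number,
): boolean {
  if (!token.signatureValid) return false;                // signed by the OP?
  if (nowSeconds >= token.expiry) return false;           // unexpired?
  if (token.publicKey !== sshCertPublicKey) return false; // key binds the session?
  return allowedUsers.has(token.email);                   // identity authorized?
}
```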
Consider the problems we face in getting OpenPubkey to work with SSH without requiring any changes to the SSH protocol or software:
How do we get the PK Token from the user’s machine to the SSH server inside the SSH protocol?

We use the fact that SSH public keys can be SSH certificates, and that SSH certificates have an extension field that allows arbitrary data to be included in the certificate. Thus, we package the PK Token into an SSH certificate extension so that it will be transmitted inside the SSH public key as a normal part of the SSH protocol. This allows OPKSSH to work without any changes to the SSH client.
How do we check that the PK Token is valid once it arrives at the SSH server?

SSH servers support a configuration parameter called AuthorizedKeysCommand that allows us to use a custom program to determine whether an SSH public key is authorized. Thus, we change the SSH server’s config file to use the OpenPubkey verifier instead of the SSH verifier by making the following two-line change to sshd_config:
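The change can be sketched as below. The binary path, arguments, and service user are assumptions for illustration; the installation script mentioned earlier sets the real values for your system.

```
# sshd_config (illustrative two-line change)
AuthorizedKeysCommand /usr/local/bin/opkssh verify %u %k %t
AuthorizedKeysCommandUser opksshuser
```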
The OpenPubkey verifier will check that the PK Token is unexpired, valid and signed by the OP. It checks the user’s email address in the PK Token to determine if the user is authorized to access the server.
How do we ensure that the public key in the PK Token is actually the public key that secures the SSH session?

The OpenPubkey verifier also checks that the public key field in the SSH public key matches the user’s public key inside the PK Token. This works because the public key field in the SSH public key is the actual public key that secures the SSH session.
We have open sourced OPKSSH under the Apache 2.0 license, and released it as openpubkey/opkssh on GitHub. While the OpenPubkey project has had code for using SSH with OpenPubkey since the early days of the project, this code was intended as a prototype and was missing many important features. With OPKSSH, SSH support in OpenPubkey is no longer a prototype and is now a complete feature. Cloudflare is not endorsing OPKSSH, but simply donating code to OPKSSH.
OPKSSH provides the following improvements to OpenPubkey:
There are a number of ways to get involved in OpenPubkey or OPKSSH. The project is organized through the OPKSSH GitHub. We are building an open and friendly community and welcome pull requests from anyone. If you are interested in contributing, see our contribution guide.
We run a community meeting every month which is open to everyone, and you can also find us over on the OpenSSF Slack in the #openpubkey channel.
At Cloudflare, we treat developer content like a product, where we take the user and their feedback into consideration. We are constantly iterating, testing, analyzing, and refining content. Inspired by agile practices, treating developer content like an open source product means we approach our documentation the same way an open source software project is created and maintained.

Open source documentation empowers the developer community because it allows anyone, anywhere, to contribute content. By making both the content and the framework of the documentation site publicly accessible, we provide developers with the opportunity to not only improve the material itself but also understand and engage with the processes that govern how the documentation is built, approved, and maintained. This transparency fosters collaboration, learning, and innovation, enabling developers to contribute their expertise and learn from others in a shared, open environment. We also provide feedback to other open source products and plugins, giving back to the same community that supports us.
Building the best open source documentation experience
Great documentation empowers users to be successful with a new product as quickly as possible, showing them how to use the product and describing its benefits. Relevant, timely, and accurate content can save frustration, time, and money. Open source documentation adds a few more benefits, including building inclusive and supportive communities that help reduce the learning curve. We love being open source!
While the Cloudflare content team has scaled to deliver documentation alongside product launches, the open source documentation site itself was not scaling well. developers.cloudflare.com had outgrown its contributor workflow, and we were missing out on all the neat tooling created by developers in the community.
Just like a software product evaluation, we reviewed our business needs. We asked ourselves whether remaining open source was still appropriate. Were there other tools we wanted to use? What benefits did we want to see in a year, or in five? Beyond the contributor workflow challenges, our biggest limitations seemed to be scalability and the high maintenance cost of user experience improvements.
After compiling our wishlist of new features to implement, we reaffirmed our commitment to open source. We valued the benefit of open source in both the content and the underlying framework of our documentation site. This commitment goes beyond technical considerations, because it's a fundamental aspect of our relationship with our community and our philosophy of transparency and collaboration. While the choice of an open source framework to build the site on might not be visible to many visitors, we recognized its significance for our community of developers and contributors.

Our decision-making process was heavily influenced by two primary factors: first, whether the update would enhance the collaborative ecosystem, and second, how it would improve the overall documentation experience. This focus reflects that our open source principles, applied to both content and infrastructure, are essential for fostering innovation, ensuring quality through peer review, and building a more engaged and empowered user community.
Cloudflare developer documentation: A collaborative open source approach
Cloudflare’s developer documentation is open source on GitHub, with content supporting all of Cloudflare’s products. The underlying documentation engine has gone through a few iterations, with the first version of the site released in 2020. That first version provided dev-friendly features such as dark mode and proper code syntax.
In 2021, we introduced a new custom documentation engine, bringing significant improvements to the Cloudflare content experience. The benefits of the Gatsby to Hugo migration included:
Faster development flow: The development flow replicated production behavior, increasing iteration speed and confidence. Preview links via Cloudflare Pages were also introduced, so the content team and stakeholders could quickly review what content would look like in production.
Custom components: Introduced features like resources-by-selector which let us reference content throughout the repository and gave us the flexibility to expand checks and automations.
Structured changelog management: Implementation of structured YAML changelog entries which facilitated sharing with various platforms like RSS feeds, Developer Discord, and within the docs themselves.
Improved performance: Significant page load time improvements with the migration to HTML-first and almost instantaneous local builds.
These features were non-negotiable as part of our evaluation of whether to migrate. We knew that any update to the site had to maintain the functionality we’d established as core parts of the new experience.
2024 update: Say “hello, world!” to our new developer documentation, powered by Astro
After careful evaluation, we chose to migrate from Hugo to the Astro (and by extension, JavaScript) ecosystem. Astro fulfilled many items on our wishlist including:
Enhanced content organization: Improved tagging and better cross-referencing of related pages.
Extensibility: Support for user plugins like starlight-image-zoom for lightbox functionality.
Development experience: Type-checking at build time with astro check, along with syntax highlighting, Intellisense, diagnostic messages, and plugins for ESLint, Stylelint, and Prettier.
JavaScript/TypeScript support: Aligned the docs site framework with the preferred languages of many contributors, facilitating easier contribution.
CSS management: Introduction of Tailwind and scoped styles.
Starlight, Astro’s documentation theme, was a key factor in the decision. Its powerful component overrides and plugins system allowed us to leverage built-in components and base styling.
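Overrides and plugins like these live in the site's Astro config. As a minimal sketch of what that looks like (the site title, the `Header` override, and its component path are illustrative assumptions, not Cloudflare's actual configuration; `starlight-image-zoom` is the community plugin mentioned above):

```javascript
// astro.config.mjs — illustrative sketch, not Cloudflare's real config.
import { defineConfig } from "astro/config";
import starlight from "@astrojs/starlight";
import starlightImageZoom from "starlight-image-zoom";

export default defineConfig({
  integrations: [
    starlight({
      title: "Developer Docs", // placeholder title
      // Community plugins slot into Starlight's plugin system.
      plugins: [starlightImageZoom()],
      // Built-in components can be swapped for local ones.
      components: {
        Header: "./src/components/Header.astro", // hypothetical override
      },
    }),
  ],
});
```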
Content needed to be migrated quickly. With dozens of pull requests opened and merged each day, entering a code freeze for a week simply wasn’t feasible. This is where abstract syntax trees (ASTs) came into play: an AST captures the structure of a Markdown document while ignoring details like whitespace and indentation that make a regular expression approach tricky.
With Hugo in 2021, we configured code block functionality like titles or line highlights with front matter inside the code block.
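To give a flavor of the structure-aware approach, here is a tiny sketch (not the actual migration script; the front-matter format and the `title="…"` meta attribute are simplified stand-ins) of a depth-first walker over an mdast-shaped tree that lifts a code block's title out of its in-block front matter and into the fence's meta string:

```javascript
// Illustrative sketch of an AST-based migration pass, not the real script.
// The tree shape follows mdast (what remark produces); it is hand-built here.

// Depth-first walk, in the spirit of the `astray` utility mentioned below.
function walk(node, visit) {
  visit(node);
  for (const child of node.children ?? []) walk(child, visit);
}

// Move a Hugo-era in-block "title" into the fence meta string.
function migrateCodeBlocks(tree) {
  walk(tree, (node) => {
    if (node.type === "code" && node.value.startsWith("---\ntitle:")) {
      const [, header, ...rest] = node.value.split("---\n");
      const title = header.match(/title:\s*(.+)/)[1].trim();
      node.meta = `title="${title}"`;           // assumed target syntax
      node.value = rest.join("---\n").trimStart(); // strip the front matter
    }
  });
  return tree;
}
```

Because the transform operates on `code` nodes rather than raw text, indentation, surrounding prose, and lookalike strings elsewhere in the document can never trip it up.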
When we migrated from Gatsby to Hugo in 2021, the pull request included 4,850 files and the migration took close to three weeks from planning to implementation. This time around, the migration was nearly twice as large, with 8,060 files changed. Our planning and migration took six weeks in total:
10 days: Evaluate platforms, vendors, and features
14 days: Migrate the components required by the documentation site
The migration resulted in removing a net 19,624 lines of code from our maintenance burden.
While the number of files had grown substantially since our last major migration, our strategy was very similar to the 2021 migration. We used Markdown AST and astray, a utility to walk ASTs, created specifically for the previous migration!
A website migration like our move to Astro/Starlight is a complex process that requires time to plan, review, and coordinate, and our preparation paid off! Including our Cloudflare Community MVPs as part of the planning and review period proved incredibly helpful. They provided great guidance and feedback as we planned for the migration. We only needed one day of code freeze, and there were no rollbacks or major incidents. Visitors to the site never experienced downtime, and overall the migration was a major success.
During testing, we ran into several use cases that warranted using experimental Astro APIs. These APIs were always well documented, thanks to fantastic open source content from the Astro community. We were able to implement them quickly without impacting our release timeline.
We also ran into an edge case with build-time performance due to the number of pages on our site (4,000+). The Astro team was quick to triage the problem and begin investigating a permanent fix. Their fast, helpful responses made us truly grateful for the support from the Astro Discord server. A big thank you to the Astro/Starlight community!
Migrating developers.cloudflare.com to Astro/Starlight is just one example of the ways we prioritize world-class documentation and user experiences at Cloudflare. Our deep investment in documentation makes this a great place to work for technical writers, UX strategists, and many other content creators. Since adopting a “content like a product” strategy in 2021, we have evolved to better serve the open source community by focusing on inclusivity and transparency, which ultimately leads to happier Cloudflare users.
We invite everyone to connect with us and explore these exciting new updates. Feel free to reach out if you’d like to speak with someone on the content team or share feedback about our documentation. You can share your thoughts or submit a pull request directly on the cloudflare-docs repository in GitHub.
Expanding Cloudflare's support for open source projects with Project Alexandria
At Cloudflare, we believe in the power of open source. It’s more than just code, it’s the spirit of collaboration, innovation, and shared knowledge that drives the Internet forward. Open source is the foundation upon which the Internet thrives, allowing developers and creators from around the world to contribute to a greater whole.
But oftentimes, open source maintainers struggle with the costs associated with running their projects and providing access to users all over the world. We’ve had the privilege of supporting incredible open source projects such as Git and the Linux Foundation through our open source program and learned first-hand about the places where Cloudflare can help the most.
Today, we're introducing a streamlined and expanded open source program: Project Alexandria. The ancient city of Alexandria was home to a famed library and to a lighthouse that was one of the Seven Wonders of the Ancient World. The Lighthouse of Alexandria served as a beacon of culture and community, welcoming people from afar into the city. We think Alexandria is a great metaphor for the role open source projects play as a beacon for developers around the world, and as a source of knowledge that is core to making a better Internet.
This project offers recurring annual credits to even more open source projects to provide our products for free. In the past, we offered an upgrade to our Pro plan, but now we’re offering upgrades tailored to the size and needs of each project, along with access to a broader range of products like Workers, Pages, and more. Our goal with Project Alexandria is to ensure every OSS project not only survives but thrives, with access to Cloudflare’s enhanced security, performance optimization, and developer tools — all at no cost.
Building a program based on your needs
We realize that open source projects have different needs. Some projects, like package repositories, may be most concerned about storage and transfer costs. Other projects need help protecting them from DDoS attacks. And some projects need a robust developer platform to enable them to quickly build and deploy scalable and secure applications.
With our new program we’ll work with your project to help unlock the following based on your needs:
An upgrade to a Cloudflare Pro, Business, or Enterprise plan, which will give you more flexibility with more Cloudflare Rules to manage traffic with, Image Optimization with Polish to accelerate the speed of image downloads, and enhanced security with Web Application Firewall (WAF), Security Analytics, and Page Shield, to protect projects from potential threats and vulnerabilities.
Increased requests to Cloudflare Workers and Pages, allowing you to handle more traffic and scale your applications globally.
Increased R2 storage for builds and artifacts, ensuring you have the space needed to store and access your project’s assets efficiently.
Enhanced Zero Trust access, including Remote Browser Isolation, no user limits, and extended activity log retention to give you deeper insights and more control over your project’s security.
Every open source project in the program will receive additional resources and support through a dedicated channel on our Discord server. And if there’s something you think we can do to help that we don’t currently offer, we’re here to figure out how to make it happen.
Many open source projects run within the limits of Cloudflare’s generous free tiers. Our mission to help build a better Internet means that cost should not be a barrier to creating, securing, and distributing your open source packages globally, no matter the size of the project. Indie or niche open source projects can still run for free without the need for credits. For larger open source projects, the annual recurring credits are available to you, so your money can continue to be reinvested into innovation, instead of paying for infrastructure to store, secure, and deliver your packages and websites.
We’re dedicated to supporting projects that are not only innovative but also crucial to the continued growth and health of the Internet. The criteria for the program remain the same:
Operate solely on a non-profit basis and/or otherwise align with the project mission.
We’re incredibly lucky to have open source projects that we admire, and the incredible people behind those projects, as part of our program — including the OpenJS Foundation, OpenTofu, and JuliaLang.
OpenJS Foundation
Node.js has been part of our OSS Program since 2019, and we’ve recently partnered with the OpenJS Foundation to provide technical support and infrastructure improvements to other critical JavaScript projects hosted at the foundation, including Fastify, jQuery, Electron, and NativeScript.
One prominent example of the OpenJS Foundation using Cloudflare is the Node.js CDN Worker. It’s currently in active development by the Node.js Web Infrastructure and Build teams and aims to serve all Node.js release assets (binaries, documentation, etc.) provided on their website.
Aaron Snell explained that these release assets are currently served by a single static origin file server fronted by Cloudflare. This worked fine until a few years ago, when issues began to pop up with new releases. Each new release triggered a cache purge, so requests for the release assets were all cache misses, causing Cloudflare to go directly to the static file server and overload it. Because Node.js publishes nightly builds, this happens every day.
The CDN Worker plans to fix this by using Cloudflare Workers and R2 to serve requests for the release assets, taking all the load off the static file server, resulting in improved availability for Node.js downloads and documentation, and ultimately making the process more sustainable in the long run.
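A minimal sketch of this pattern (hypothetical, not the actual CDN Worker; the `RELEASES` binding name and the path-to-key mapping are assumptions): the Worker looks the requested asset up in R2 and streams it back, so the origin file server is never contacted.

```javascript
// Hypothetical sketch of serving release assets from R2, not the real
// Node.js CDN Worker. Assumes an R2 bucket bound to the Worker as
// `env.RELEASES`.

// Pure helper: map a request path to an R2 object key (mapping is assumed).
function pathToKey(pathname) {
  return pathname.replace(/^\/+/, "");
}

const worker = {
  async fetch(request, env) {
    const key = pathToKey(new URL(request.url).pathname);
    const object = await env.RELEASES.get(key); // R2 bindings API
    if (object === null) {
      return new Response("Not found", { status: 404 });
    }
    // Stream the asset straight from R2; no origin server involved.
    return new Response(object.body, { headers: { etag: object.httpEtag } });
  },
};
```

In an actual Workers project, `worker` would be the module's default export, and a nightly release would simply upload new objects to the bucket rather than purging a cache in front of a single origin.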
OpenTofu
OpenTofu has been focused on building a free and open alternative to proprietary infrastructure-as-code platforms. One of their major challenges has been ensuring the reliability and scalability of their registry while keeping costs low. Cloudflare's R2 storage and caching services provided the perfect fit, allowing OpenTofu to serve static files at scale without worrying about bandwidth or performance bottlenecks.
The OpenTofu team noted that it was paramount for OpenTofu to keep the costs of running the registry as low as possible, both in bandwidth and in human cost. However, they also needed the registry to have uptime close to 100%, since thousands upon thousands of developers would be left without a means to update their infrastructure if it went down.
The registry codebase (written in Go) pre-generates all possible answers of the OpenTofu Registry API and uploads the static files to an R2 bucket. With R2, OpenTofu has been able to run the registry essentially for free with no servers and scaling issues to worry about.
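The idea can be sketched as follows (the real registry code is written in Go; the API path shape and the provider data here are illustrative): enumerate every answer the registry API could give, render each one as a static JSON document, and upload the results to R2, which then serves them as-is.

```javascript
// Illustrative sketch of pre-generating registry API responses as static
// files. The real OpenTofu registry code is Go; paths and data are made up.

function generateRegistryFiles(providers) {
  const files = new Map();
  for (const p of providers) {
    // One static file per API answer, stored at the path the API would serve.
    files.set(
      `v1/providers/${p.namespace}/${p.name}/versions`,
      JSON.stringify({ versions: p.versions.map((v) => ({ version: v })) })
    );
  }
  return files; // each entry is uploaded to an R2 bucket and served directly
}
```

Because every response is computed ahead of time, the "API" at request time is just static file serving: no servers to scale, and nothing to fall over under load.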
JuliaLang
JuliaLang has recently joined our OSS Sponsorship Program, and we’re excited to support their critical infrastructure to ensure the smooth operation of their ecosystem. A key aspect of this support is enabling the use of Cloudflare’s services to help JuliaLang deliver packages to its user base.
According to Elliot Saba, JuliaLang had been using Amazon Lightsail as a cost-effective global CDN to serve packages to their user base. However, as their user base grew, they would occasionally exceed their bandwidth limits and rack up serious cloud costs, not to mention suffer degraded performance when load balancer VMs were overloaded by traffic spikes. JuliaLang now uses Cloudflare R2: the speed and reliability of R2 object storage have so far exceeded those of their own within-datacenter solutions, and the lack of bandwidth charges means JuliaLang is getting faster, more reliable service for less than a tenth of their previous spend.
How can we help?
If your project fits our criteria, and you’re looking to reduce costs and eliminate surprise bills, we invite you to apply! We’re eager to help the next generation of open source projects make their mark on the Internet.
For more details and to apply, visit our new Project Alexandria page. And if you know other projects that could benefit from this program, please spread the word!