
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 15:11:43 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Bringing Your Own IPs to Cloudflare (BYOIP)]]></title>
            <link>https://blog.cloudflare.com/bringing-your-own-ips-to-cloudflare-byoip/</link>
            <pubDate>Thu, 30 Jul 2020 15:00:00 GMT</pubDate>
            <description><![CDATA[ Today we’re thrilled to announce general availability of Bring Your Own IP (BYOIP) across our Layer 7 products as well as Spectrum and Magic Transit services.  ]]></description>
            <content:encoded><![CDATA[ <p>Today we’re thrilled to announce general availability of Bring Your Own IP (BYOIP) across our Layer 7 products as well as <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum</a> and <a href="https://www.cloudflare.com/magic-transit/">Magic Transit</a> services. When BYOIP is configured, the Cloudflare edge will announce a customer’s own IP prefixes, and those prefixes can be used with our Layer 7 services, Spectrum, or Magic Transit. If you’re not familiar with the term, an IP prefix is a range of IP addresses. Routers create a table of reachable prefixes, known as a routing table, to ensure that packets are delivered correctly across the Internet.</p><p>As part of this announcement, we are listing BYOIP on the relevant product <a href="https://www.cloudflare.com/cdn/">pages</a> and in our <a href="https://developers.cloudflare.com/byoip/">developer documentation</a>, and adding UI support for controlling your prefixes. Previously, prefixes could only be managed through the API.</p><p>Customers choose BYOIP with Cloudflare for a number of reasons. It may be the case that your IP prefix is already allow-listed in many important places, and updating firewall rules to also allow Cloudflare address space may represent a large administrative hurdle. Additionally, you may have hundreds of thousands, or even millions, of end users pointed directly to your IPs via DNS, and it would be hugely time consuming to get them all to update their records to point to Cloudflare IPs.</p><p>Over the last several quarters we have been building tooling and processes to support customers bringing their own IPs at scale. At the time of writing this post we’ve successfully onboarded hundreds of customer IP prefixes.
Of these, 84% have been for Magic Transit deployments, 14% for Layer 7 deployments, and 2% for Spectrum deployments.</p><p>When you BYOIP with Cloudflare, we announce your IP space in over 200 cities around the world and tie your IP prefix to the service (or services!) of your choosing. Your IP space will be protected and accelerated as if it were Cloudflare’s own. We can support regional deployments for BYOIP prefixes as well if you have technical and/or legal requirements limiting where your prefixes can be announced, such as <a href="https://www.cloudflare.com/learning/privacy/what-is-data-sovereignty/">data sovereignty</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4gcn5knjumg5Lmd32FpGFj/3eae9cf0c1335bf60324dc2d88e86b8b/IP-at-the-edge_2x.png" />
            
            </figure><p>You can turn on advertisement of your IPs from the Cloudflare edge with a click of a button and be live across the world in a matter of minutes.</p><p>All BYOIP customers receive <a href="/announcing-network-analytics/">network analytics</a> on their prefixes. Additionally all IPs in BYOIP prefixes can be considered static IPs. There are also benefits specific to the service you use with your IP prefix on Cloudflare.</p>
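            <p>As an illustration, advertisement can also be controlled programmatically. The request below is a sketch only; the account and prefix identifiers are placeholders, and the authoritative endpoint and fields are in our API documentation:</p>
            <pre><code>curl -X PATCH \
  "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/addressing/prefixes/$PREFIX_ID/bgp/status" \
  -H "X-Auth-Email: $CLOUDFLARE_AUTH_EMAIL" \
  -H "X-Auth-Key: $CLOUDFLARE_AUTH_KEY" \
  -H "Content-Type: application/json" \
  --data '{"advertised": true}'</code></pre>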
    <div>
      <h4>Layer 7 + BYOIP:</h4>
      <a href="#layer-7-byoip">
        
      </a>
    </div>
    <p>Cloudflare has a robust Layer 7 product portfolio, including products like Bot Management, Rate Limiting, Web Application Firewall, and Content Delivery, to name just a few. You can choose to BYOIP with our Layer 7 products and receive all of their benefits on your IP addresses.</p><p>For Layer 7 services, we can support a variety of IP-to-domain mapping requests, including sharing IPs between domains or putting domains on dedicated IPs, which can help meet requirements for things such as non-SNI support.</p><p>If you are also an SSL for SaaS customer using BYOIP, you have increased flexibility to change IP address responses for <a href="https://developers.cloudflare.com/ssl/ssl-for-saas/status-codes/custom-hostnames/"><code>custom_hostnames</code></a> in the event an IP is unserviceable.</p>
    <div>
      <h4>Spectrum + BYOIP:</h4>
      <a href="#spectrum-byoip">
        
      </a>
    </div>
    <p>Spectrum is Cloudflare’s solution to protect and accelerate applications that run any UDP or TCP protocol. The Spectrum <a href="https://developers.cloudflare.com/spectrum/getting-started/byoip/">API</a> supports BYOIP today. Spectrum customers who use BYOIP can specify, through Spectrum’s API, which IPs they would like associated with a Spectrum application.</p>
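    <p>As a sketch of what that request might look like (the hostnames and IPs below are illustrative placeholders, and the exact field names are documented in the Spectrum API reference linked above), a request body creating a Spectrum app pinned to an IP from your own prefix could resemble:</p>
            <pre><code>{
  "protocol": "tcp/443",
  "dns": { "type": "CNAME", "name": "spectrum.example.com" },
  "origin_direct": ["tcp://192.0.2.1:443"],
  "edge_ips": { "type": "static", "ips": ["203.0.113.10"] }
}</code></pre>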
    <div>
      <h4>Magic Transit + BYOIP:</h4>
      <a href="#magic-transit-byoip">
        
      </a>
    </div>
    <p>Magic Transit is a Layer 3 security service which processes all your network traffic by announcing your IP addresses and attracting that traffic to the Cloudflare edge for processing. Magic Transit supports sophisticated packet filtering and firewall configurations. BYOIP is a requirement for using the Magic Transit service. As Magic Transit is an IP-level service, Cloudflare must be able to announce your IPs in order to provide this service.</p>
    <div>
      <h3>Bringing Your IPs to Cloudflare: What is Required?</h3>
      <a href="#bringing-your-ips-to-cloudflare-what-is-required">
        
      </a>
    </div>
    <p>Before Cloudflare can announce your prefix we require some documentation to get started. The first is something called a ‘Letter of Authorization’ (LOA), which details information about your prefix and how you want Cloudflare to announce it. We then share this document with our Tier 1 transit providers in advance of provisioning your prefix. This step is done to ensure that Tier 1s are aware we have authorization to announce your prefixes.</p><p>Secondly, we require that your Internet Routing Registry (IRR) records are up to date and reflect the data in the LOA. This typically means ensuring the entry in your regional registry is updated (i.e. ARIN, RIPE, APNIC).</p><p>Once the administrivia is out of the way, work with your account team to learn when your prefixes will be ready to announce.</p><p>We also encourage customers to use <a href="/tag/rpki/">RPKI</a> and can support this for customer prefixes. We have blogged and built extensive tooling to make adoption of this protocol easier. If you’re interested in BYOIP with RPKI support just let your account team know!</p>
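    <p>For reference, the IRR records mentioned above are RPSL <code>route</code> objects naming Cloudflare’s ASN (AS13335) as the origin. An entry might look roughly like the following; all values here are illustrative, and your registry, prefix, and maintainer details will differ:</p>
            <pre><code>route:      203.0.113.0/24
descr:      Example Corp prefix announced by Cloudflare
origin:     AS13335
mnt-by:     MAINT-EXAMPLECORP
source:     ARIN</code></pre>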
    <div>
      <h3>Configuration</h3>
      <a href="#configuration">
        
      </a>
    </div>
    <p>Each customer prefix can be announced via the ‘dynamic advertisement’ toggle in either the UI or <a href="https://api.cloudflare.com/#ip-address-management-dynamic-advertisement-properties">API</a>, which will cause the Cloudflare edge to either announce or withdraw a prefix on your behalf. This can be done as soon as your account team lets you know your prefixes are ready to go.</p><p>Once the IPs are ready to be announced, you may want to set up ‘delegations’ for your prefixes. Delegations manage how the prefix can be used across multiple Cloudflare accounts and have slightly different implications depending on which service your prefix is bound to. A prefix is owned by a single account, but a delegation can extend some of the prefix functionality to other accounts. This is also captured on our developer docs. Today, delegations can affect Layer 7 and Spectrum BYOIP prefixes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5gqLS8zRdrCwoB8nbDw2r0/a3292109c4ab4345964eb6775908ca59/delegation-BYOIP_2x.png" />
            
            </figure><p>Layer 7: If you use BYOIP + Layer 7 and also use the <a href="https://developers.cloudflare.com/ssl/ssl-for-saas">SSL for SaaS</a> service, a delegation to another account will allow that account to also use that prefix to validate custom hostnames in addition to the original account which owns the prefix. This means that multiple accounts can use the same IP prefix to serve up custom hostname traffic. Additionally, all of your IPs can serve traffic for custom hostnames, which means you can easily change IP addresses for these hostnames if an IP is blocked for any reason.</p><p>Spectrum: If you use BYOIP + Spectrum, via the <a href="https://developers.cloudflare.com/spectrum/getting-started/byoip/">Spectrum API</a>, you can specify which IP in your prefix you want to create a Spectrum app with. If you create a delegation for a prefix to another account, that second account will also be able to specify an IP from that prefix to create an app.</p><p>If you are interested in learning more about BYOIP across either Magic Transit, CDN, or Spectrum, please reach out to your account team if you’re an existing customer or contact <a href="mailto:sales@cloudflare.com">sales@cloudflare.com</a> if you’re a new prospect.</p> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Magic Transit]]></category>
            <category><![CDATA[Spectrum]]></category>
            <guid isPermaLink="false">6t1GxKr7I9LZIpahhFVJPQ</guid>
            <dc:creator>Tom Brightbill</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Full CI/CD Pipeline for Workers with Travis CI]]></title>
            <link>https://blog.cloudflare.com/a-ci/</link>
            <pubDate>Fri, 22 Mar 2019 17:10:37 GMT</pubDate>
            <description><![CDATA[ In today’s post we’re going to talk about building a CI/CD pipeline for Cloudflare Workers using Travis CI. If you aren’t yet aware, Cloudflare Workers allow you to run JavaScript in all 165 of our data centers, and they deploy globally in about 30 seconds. Learn more here.  ]]></description>
            <content:encoded><![CDATA[ <p>In today’s post we’re going to talk about building a <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD pipeline</a> for Cloudflare Workers using Travis CI. If you aren’t yet aware, Cloudflare Workers allow you to run JavaScript in all 165 of our data centers, and they deploy globally in about 30 seconds. Learn more <a href="https://developers.cloudflare.com/workers/about/">here</a>.</p><p>There are a few steps before we get started. We need to have a Worker script we want to deploy, some optional unit tests for the script, a <code>serverless.yml</code> file to deploy via the Serverless Framework, a <code>.gitignore</code> file to ignore the <code>node_modules</code> folder, and finally, a <code>.travis.yml</code> configuration file. All of these files will live in the same GitHub repository, which should have a final layout like:</p>
            <pre><code>----- worker.js
----- serverless.yml
----- test
      . worker-test.js
----- node_modules
----- package.json
----- package-lock.json
----- .travis.yml
----- .gitignore</code></pre>
            
    <div>
      <h3>The Worker Script</h3>
      <a href="#the-worker-script">
        
      </a>
    </div>
    <p><a href="/unit-testing-worker-functions/">In a recent post</a> we discussed a method for testing Workers. We’ll reuse this method here to test the simple Worker script below, which returns <code>Hello World!</code> in the body of the response. We will name our Worker <code>worker.js</code>.</p>
            <pre><code>addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event.request))
})
async function handleRequest(request) {
  return new Response('Hello World!')
}</code></pre>
            
    <div>
      <h3>The Test</h3>
      <a href="#the-test">
        
      </a>
    </div>
    <p>We will create a single test case following the method discussed in the <a href="/unit-testing-worker-functions/">unit testing blog</a> post.</p>
            <pre><code>before(async function () {
   Object.assign(global, new (require('@dollarshaveclub/cloudworker'))(require('fs').readFileSync('worker.js', 'utf8')).context)
})
// replace worker.js here with the name of your worker file
const assert = require('assert')

describe('Worker Test', function() {
    it('Response with a body that says hello', async function () {
    var url = new URL('https://travis.example.com')
    var req = new Request(url)
    var res = await handleRequest(req)
    var body = await res.text()
    assert.equal(body, 'Hello World!')
    })
})</code></pre>
            <p>Then we’ll update our <code>package.json</code> file to include:</p>
            <pre><code>"scripts": {
  "test": "mocha"
}</code></pre>
            <p>And install <code>mocha</code> with <code>npm install mocha --save-dev</code> and <code>cloudworker</code> with <code>npm install @dollarshaveclub/cloudworker --save-dev</code>.</p>
    <div>
      <h3>serverless.yml</h3>
      <a href="#serverless-yml">
        
      </a>
    </div>
    <p>Next, we’ll need a <code>serverless.yml</code> file to deploy the worker. This is a config file which is used by the <a href="https://serverless.com/framework/docs/providers/cloudflare/">Serverless Framework</a> to deploy serverless apps to supported providers. We became a provider some <a href="/serverless-cloudflare-workers/">time ago</a>, and we will use the framework to deploy our Workers in this example.</p><p>We will run the <code>sls deploy</code> command in our Travis config and it will pick up our <code>serverless.yml</code> to deploy the Worker for us. <code>serverless.yml</code> will reference <code>ENV</code> variables which we will pass to Travis in the final section of the post.</p><p><b>NOTE</b>: You can deploy with any arbitrary script. We’re using the Serverless Framework in this example because we already <a href="https://developers.cloudflare.com/workers/deploying-workers/serverless/">integrate</a> with them and getting started is straightforward.</p><p>Our <code>serverless.yml</code> will look like:</p>
            <pre><code>service:
  name: travis-example
  
provider:
  name: cloudflare
  
config:
  accountId: ${env:CLOUDFLARE_ACCOUNT_ID}
  zoneId: ${env:CLOUDFLARE_ZONE_ID}
  
plugins:
  - serverless-cloudflare-workers
  
functions:
  deploy-from-travis:
    name: travis-deployed-worker
    script: worker</code></pre>
            <p>Make sure to install both the Serverless Framework, and the Cloudflare Workers plugin with <code>npm install --save serverless</code> and <code>npm install --save serverless-cloudflare-workers</code>.</p>
    <div>
      <h3>travis.yml</h3>
      <a href="#travis-yml">
        
      </a>
    </div>
    <p>Below you’ll see the final <code>.travis.yml</code> and we’ll walk through each piece of it.</p>
            <pre><code>language: node_js
node_js:
  - "node"
  
deploy:
  - provider: script
    script: sls deploy
    skip_cleanup: true
    on:
      branch: main</code></pre>
            <p>Before diving in, Travis has some great resources on deploying <code>node.js</code> projects <a href="https://docs.travis-ci.com/user/languages/javascript-with-nodejs/">here</a>. While this isn’t strictly what we’re doing, it’s a great jumping off point.</p><p>So what does this <code>.travis.yml</code> mean? First, we’re telling Travis CI to use the most recent <code>node.js</code> image (you have the option to specify a version). Then we specify the command to run to actually do the deployment, <code>sls deploy</code>, but only when the main branch is involved in the build. Travis will run <code>npm test</code> for us as it’s the default for any <code>node.js</code> project, which will execute our unit tests.</p><p>The <code>skip_cleanup: true</code> prevents any conflicts with <code>git</code> during the test and deploy process.</p>
    <div>
      <h3>Configuring Travis</h3>
      <a href="#configuring-travis">
        
      </a>
    </div>
    <p>Finally! We’re almost there. Setting up Travis CI is really simple. Once you’ve got your account created, make sure you authorize Travis to access the repo which contains the worker, your tests, <code>.travis.yml</code>, and your <code>serverless.yml</code>.</p><p>Next up is adding environment variables to the build. In this case it’s going to be our <code>CLOUDFLARE_AUTH_EMAIL</code> and <code>CLOUDFLARE_AUTH_KEY</code> values which Serverless picks up to auth API requests.</p><p>I also add <code>CLOUDFLARE_ACCOUNT_ID</code> and <code>CLOUDFLARE_ZONE_ID</code> as we referenced them in <code>serverless.yml</code>. Finally I set <code>SLS_DEBUG=*</code>, just to catch any issues from Serverless.</p><p>You can add these <code>ENV</code> variables in a variety of ways outlined <a href="https://docs.travis-ci.com/user/environment-variables">here</a>. In this example we’re going to add them directly in the Travis UI so they don’t show up anywhere in the repo (as some of them are sensitive).</p><p>Navigate to the repo in the Travis UI, and hit the ‘more options’ dropdown to add ENV variables.</p>
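    <p>If you prefer the command line to the UI, the Travis CLI can set the same variables. The values below are placeholders; substitute your own credentials:</p>
            <pre><code>travis env set CLOUDFLARE_AUTH_EMAIL you@example.com
travis env set CLOUDFLARE_AUTH_KEY your-api-key
travis env set CLOUDFLARE_ACCOUNT_ID your-account-id
travis env set CLOUDFLARE_ZONE_ID your-zone-id
travis env set SLS_DEBUG '*'</code></pre>
    <p>The CLI treats variables it sets as private by default, which is what we want for the sensitive values here.</p>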
    <div>
      <h3>Complete!</h3>
      <a href="#complete">
        
      </a>
    </div>
    <p>Now PRs will trigger a test build, and a merge to main a test build and a deployment! Go ahead and test it out.</p><p>And that’s it! Did you find this useful? Please let us know if we can make this tutorial better. Thanks.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">7wL7aKM3sN6eZgmRqemTk6</guid>
            <dc:creator>Tom Brightbill</dc:creator>
        </item>
        <item>
            <title><![CDATA[Unit Testing Worker Functions]]></title>
            <link>https://blog.cloudflare.com/unit-testing-worker-functions/</link>
            <pubDate>Fri, 15 Mar 2019 14:17:11 GMT</pubDate>
            <description><![CDATA[ If you were not aware, Cloudflare Workers lets you run JavaScript in all 165+ of our Data Centers. We’re delighted to see some of the creative applications of Workers. As the use cases grow in complexity, the need to smoke test your code also grows.  ]]></description>
            <content:encoded><![CDATA[ <p>If you were not aware, Cloudflare Workers lets you run JavaScript in all 165+ of our Data Centers. We’re delighted to see some of the creative applications of Workers. As the use cases grow in complexity, the need to smoke test your code also grows.</p><p>More specifically, if your Worker includes a number of functions, it’s important to ensure each function does what it’s intended to do in addition to ensuring the output of the entire Worker returns as expected.</p><p>In this post, we’re going to demonstrate how to unit test Cloudflare Workers, and their individual functions, with <a href="https://github.com/dollarshaveclub/cloudworker">Cloudworker</a>, created by the Dollar Shave Club engineering team.</p><p>Dollar Shave Club is a Cloudflare customer, and they created Cloudworker, a mock for the Workers runtime, for testing purposes. We’re really grateful to them for this. They were kind enough to <a href="/cloudworker-a-local-cloudflare-worker-runner/">post on our blog</a> about it.</p><p>This post will demonstrate how to abstract away Cloudworker, and test Workers with the same syntax you write them in.</p>
    <div>
      <h3>Example Script</h3>
      <a href="#example-script">
        
      </a>
    </div>
    <p>Before we get into configuring Cloudworker, let’s introduce the simple script we are going to test against in our example. As you can see this script contains two functions, both of which contribute to the response to the client.</p>
            <pre><code>addEventListener('fetch', event =&gt; {
 event.respondWith(handleRequest(event.request))
})

async function addition(a, b) {
  return a + b
}

async function handleRequest(request) {
  const added = await addition(1,3)
  return new Response(`The Sum is ${added}!`)
}</code></pre>
            <p>This script will be active for the route <code>worker.example.com</code>.</p>
    <div>
      <h3>Directory Set Up</h3>
      <a href="#directory-set-up">
        
      </a>
    </div>
    <p>After I’ve created a new npm ( <code>npm init</code> ) project in a new directory, I placed my <code>worker.js</code> file inside, containing the above, and created the folder <code>test</code> which contains <code>worker-test.js</code>. The structure is laid out below.</p>
            <pre><code>.
----- worker.js
----- test
      . worker-test.js
----- node_modules
----- package.json
----- package-lock.json</code></pre>
            <p>Next I need to install Cloudworker ( <code>npm install @dollarshaveclub/cloudworker --save-dev</code> ) and the Mocha testing framework ( <code>npm install mocha --save-dev</code> ) if you do not have it installed globally. Make sure that <code>package.json</code> reflects a value of <code>mocha</code> for the <code>test</code> script, like:</p>
            <pre><code>"scripts": {
    "test": "mocha"
  }</code></pre>
            <p>Now we can finally write some tests! Luckily, <code>mocha</code> has <code>async/await</code> support which is going to make this very simple.  The idea is straightforward: Cloudworker allows you to place a Worker in development in front of an HTTP request and inspect the response.</p>
    <div>
      <h3>Writing Tests!</h3>
      <a href="#writing-tests">
        
      </a>
    </div>
    <p>Before any test logic, we’ll place two lines at the top of the test file ( <code>worker-test.js</code> ). The first line assigns all property values from Cloudworker and our Worker script to the global context before every <code>async function()</code> is run in mocha. The second line requires <code>assert</code>, which is commonly used to compare an expected output to a mocked output.</p>
            <pre><code>before(async function () {
   Object.assign(global, new (require('@dollarshaveclub/cloudworker'))(require('fs').readFileSync('worker.js', 'utf8')).context);
});

// You will replace worker.js with the relative path to your worker

const assert = require('assert')</code></pre>
            <p>Now, testing looks a lot more like a Worker itself as we have access to all the underlying functions used by Cloudworker AND the Worker script.</p>
            <pre><code>describe('Worker Test', function() {

    it('returns a body that says The Sum is 4', async function () {
        let url = new URL('https://worker.example.com')
        let req = new Request(url)
        let res = await handleRequest(req)
        let body = await res.text()
        assert.equal(body, 'The Sum is 4!')
    })

    it('does addition properly', async function() {
        let res = await addition(1, 1)
        assert.equal(res, 2)
    })

})</code></pre>
            <p>We can test individual functions with our Worker this way, as shown above with the <code>addition()</code> function call. This is really powerful and allows for more confidence when deploying complex workers as you can test each component that makes up the script. We hope this was useful and welcome any feedback.</p> ]]></content:encoded>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">6syqbsFbfzN9D5f7SEG6kU</guid>
            <dc:creator>Tom Brightbill</dc:creator>
            <dc:creator>Tim Obezuk</dc:creator>
        </item>
        <item>
            <title><![CDATA[Deploying Workers with GitHub Actions + Serverless]]></title>
            <link>https://blog.cloudflare.com/deploying-workers-with-github-actions-serverless/</link>
            <pubDate>Fri, 01 Mar 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ If you weren’t aware, Cloudflare Workers, our serverless programming platform, allows you to deploy code onto our 165 data centers around the world. 
Want to automatically deploy Workers directly from a GitHub repository? Now you can with our official GitHub Action.  ]]></description>
            <content:encoded><![CDATA[ <p>If you weren’t aware, <a href="https://developers.cloudflare.com/workers/about/">Cloudflare Workers</a>, our serverless programming platform, allows you to deploy code onto our 165 data centers around the world.</p><p>Want to automatically deploy Workers directly from a GitHub repository? Now you can with our official <a href="https://github.com/cloudflare/serverless-action">GitHub Action</a>. This Action is an extension of our existing integration with the Serverless Framework. It runs in a containerized GitHub environment and automatically deploys your Worker to Cloudflare. We chose to utilize the Serverless Framework within our GitHub Action to raise awareness of their awesome work and to enable even more serverless applications to be built with Cloudflare Workers. This Action can be used to deploy individual Worker scripts as well; the Serverless Framework is being used in the background as the deployment mechanism.</p><p>Before going into the details, we’ll quickly go over what GitHub Actions are.</p>
    <div>
      <h3>GitHub Actions</h3>
      <a href="#github-actions">
        
      </a>
    </div>
    <p>GitHub Actions allow you to <a href="https://developer.github.com/actions/creating-workflows/workflow-configuration-options/#action-blocks">trigger commands</a> in reaction to GitHub events. Similar to many <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD</a> tools, these commands run in isolated containers and can receive environment variables. Actions can trigger build, test, or deployment commands across a variety of providers, and they can be linked and run sequentially (i.e. ‘if the build passes, deploy the app’). You can pass any command to the container that enables your development workflow.</p><p>Actions are a powerful way to automate your workflow on GitHub, including automating parts of your deployment pipeline directly from where your codebase lives. To that end, we’ve built an Action to deploy a Worker to your Cloudflare zone via our existing Serverless Framework integration for Cloudflare Workers. To visualize the entire flow see below:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7ahViw5UC4o10Kun38A6j4/30f57d868f1586582eea7123fc5f5184/image1.png" />
            
            </figure><p>To see some of the other actions out there today, please <a href="https://github.com/features/actions">see here</a>.</p>
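    <p>To make the flow above concrete, a minimal <code>main.workflow</code> using our Action might look like the following sketch. The workflow and secret names here are illustrative; check the Action’s repository, linked above, for the exact secrets it expects:</p>
            <pre><code>workflow "Deploy Worker" {
  on = "push"
  resolves = ["Deploy to Cloudflare"]
}

action "Deploy to Cloudflare" {
  uses = "cloudflare/serverless-action@master"
  secrets = ["CLOUDFLARE_AUTH_EMAIL", "CLOUDFLARE_AUTH_KEY", "CLOUDFLARE_ACCOUNT_ID", "CLOUDFLARE_ZONE_ID"]
}</code></pre>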
    <div>
      <h3>Why Use the Serverless Framework?</h3>
      <a href="#why-use-the-serverless-framework">
        
      </a>
    </div>
    <p>Serverless applications are deployed without developers needing to worry about provisioning hardware, capacity planning, scaling or paying for equipment when your application isn't running. Unlike most providers who ask you to choose a region for your serverless app to run in, all Cloudflare Workers deploy into our entire global network. Learn more about the <a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/">benefits of serverless</a>.</p><p>The <a href="https://serverless.com/">Serverless Framework</a> is a popular toolkit for deploying applications that are serverless. The advantage of the Serverless Framework is that it offers a common CLI to use across multiple providers which support serverless applications. In <a href="/serverless-cloudflare-workers/">late 2018</a>, Cloudflare integrated Workers deployment into the Serverless CLI. Please check out <a href="https://developers.cloudflare.com/workers/deploying-workers/serverless/">our docs here</a> to get started.</p><p>If you run an entire application in a Worker, there is no cost to a business when the application is idle. If the application runs on our network (Cloudflare has 165 PoPs as of writing this), the app can be incredibly close to the end user, reducing latency by proximity. Additionally, Workers can be a powerful way to augment what you've already built in an existing technology, moving just the authentication or performance-sensitive components into Workers.</p>
    <div>
      <h3>Configuration</h3>
      <a href="#configuration">
        
      </a>
    </div>
    <p>Configuration of the Action is straightforward, with the side benefit of giving you just a ‘little bit’™ of exposure to the Serverless Framework if desired. A repo using this Action can just contain the Worker script to be deployed. If you feed the Action the right ENV variables, we’ll take care of the rest.</p><p>Alternatively you can also provide a <code>serverless.yml</code> in the root of your repo with your worker if you want to override the defaults. Get started learning about our integration with Serverless <a href="https://developers.cloudflare.com/workers/deploying-workers/serverless/">here</a>.</p><p>Your Worker script, and optional <code>serverless.yml</code> are passed into the container which runs the Action for deployment. The Serverless Framework picks up these files and deploys the Worker for you.</p><p>All the relevant variables must be passed to the Action as well, which include various account identifiers as well as your API key. You can check out this <a href="https://help.github.com/articles/creating-a-workflow-with-github-actions/">tutorial</a> from GitHub on how to pass environmental variables to an Action (<i>hint</i>: use the <code>secret</code> variable type for your API key).</p><h6>Support</h6><p>The repository is publicly available <a href="https://github.com/cloudflare/serverless-action">here</a> which goes over the configuration in more technical detail. Any question/suggestions feel free to let us know!</p> ]]></content:encoded>
            <category><![CDATA[GitHub]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3Xd7ZdguOQ4zXKhNHDRQeG</guid>
            <dc:creator>Tom Brightbill</dc:creator>
        </item>
        <item>
            <title><![CDATA[Argo Tunnel + DC/OS]]></title>
            <link>https://blog.cloudflare.com/argo-tunnel-and-dc-os/</link>
            <pubDate>Mon, 21 Jan 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare is proud to partner with Mesosphere on their new Argo Tunnel offering available within their DC/OS (Data Center / Operating System) catalogue! Before diving deeper into the offering itself, we’ll first do a quick overview of the Mesosphere platform, DC/OS. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare is proud to partner with Mesosphere on their new Argo Tunnel offering available within their DC/OS (Data Center / Operating System) catalogue! Before diving deeper into the offering itself, we’ll first do a quick overview of the Mesosphere platform, DC/OS.</p>
    <div>
      <h2>What is Mesosphere and DC/OS?</h2>
      <a href="#what-is-mesosphere-and-dc-os">
        
      </a>
    </div>
    <p>Mesosphere DC/OS provides application developers and operators an easy way to consistently deploy and run applications and data services on cloud providers and on-premise infrastructure. The unified developer and operator experience across clouds makes it easy to realize use cases like global reach, resource expansion, and business continuity.</p><p>In this multi-cloud world, Cloudflare and Mesosphere DC/OS are great complements. Mesosphere DC/OS provides the same common services experience for developers and operators, and Cloudflare provides the same common service access experience across cloud providers. DC/OS helps tremendously in avoiding vendor lock-in to a single provider, while Cloudflare can load balance traffic intelligently (in addition to providing many other services) at the edge between providers. This new offering will allow you to load balance through the use of Argo Tunnel.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2G5ZOTv2xHKdlDqCus7HnU/bd4cf9e0d12ac9fbb93e6b3a9476b8c0/multicloud-neautral_2x.png" />
            
            </figure>
    <div>
      <h3>Quick Tunnel Refresh</h3>
      <a href="#quick-tunnel-refresh">
        
      </a>
    </div>
    <p>Cloudflare Argo Tunnel is a private connection between your services and Cloudflare. Tunnel ensures that only traffic routed through the Cloudflare network can reach your service.</p><p>Cloudflare’s lightweight Argo Tunnel daemon creates an encrypted Tunnel between your origin web server and Cloudflare’s nearest data center — all without opening any public inbound ports. In other words, it’s a private link. Only Cloudflare can see the service and communicate with it; for the rest of the Internet, the service is reachable only through the hostname configured on Cloudflare. Check this out if you’d like to learn more.</p><p>By using Argo Tunnel, DC/OS is able to load balance your traffic to any of your hosts, wherever they are running on Earth, in any cloud provider! Need more instances in Paris? Just launch them! Are instances more affordable in a specific provider? Just launch them there, and thanks to Argo Tunnel and DC/OS your traffic will be directed to exactly where it belongs.</p>
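    <p>As a rough sketch of what running the daemon looks like (the hostname and local port here are hypothetical placeholders, and flag names should be checked against the current <code>cloudflared</code> docs), the simplest private-link form is a single command:</p>

```sh
# Expose a local service at a Cloudflare-managed hostname.
# app.example.com is a hypothetical hostname in your Cloudflare zone;
# the service listens locally on port 8080. All connections are
# outbound from the daemon, so no inbound firewall ports are opened.
cloudflared tunnel --hostname app.example.com --url http://localhost:8080
```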
    <div>
      <h3>Requirements</h3>
      <a href="#requirements">
        
      </a>
    </div>
    <p>In order to use this application in DC/OS you must have a zone on Cloudflare with the <a href="/Argo/">Argo service enabled</a>. Argo can be enabled for any plan type and is billed on usage. Because this application requires the use of DC/OS ‘secrets’, the Enterprise version of DC/OS is required. To get started with Cloudflare, please see <a href="https://www.cloudflare.com/plans/">here</a> and sign up for an account. To do the same with DC/OS, please see <a href="https://docs.mesosphere.com/1.12/installing/evaluation/">here</a>.</p>
    <div>
      <h3>Cloudflare Argo Tunnel Support for DC/OS</h3>
      <a href="#cloudflare-argo-tunnel-support-for-dc-os">
        
      </a>
    </div>
    <p>Argo Tunnel is the fast way to make services that run on DC/OS private agents (and are only bound to the DC/OS internal network) accessible over the public Internet.</p><p>When you launch the Tunnel for your service, it creates persistent outbound connections to the two closest Cloudflare PoPs, through which the entire Cloudflare network routes to reach the service associated with the Tunnel. There is no need to configure DNS, update a NAT configuration, or modify firewall rules (connections are outbound). The Argo Tunnel exposed service gets all the benefits offered by the Cloudflare network (e.g. DDoS protection, CDN caching and performance, TLS).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/59NfzsE9nLW3aaKhzrSR1I/f0eb0aaa4d0e97feb5dfa942e9919446/Argo-Tunnel-DC-OS.png" />
            
            </figure><p>The Cloudflare Argo Tunnel Service is available from the Mesosphere DC/OS catalog.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ILYxEsM8NriMNpWhNZrf7/fd0f075da7013e08e4d6a0c01e537636/Argo-2.png" />
            
            </figure><p>Configuration of the Argo Tunnel requires you to specify three things.</p><ul><li><p>Cloudflare Hostname - The DNS name of your service on the Cloudflare network. This is the address where you wish your service to be available on the Internet. (Note: adding a zone to Cloudflare is extremely simple; you can get started at <a href="https://www.cloudflare.com/plans/">https://www.cloudflare.com/plans/</a>.)</p></li><li><p>Local Service URL - The local URL, on the machines running Argo Tunnel, of the service that you want to make available.</p></li><li><p>Load Balancer Pool - The load balancer pool you want the service to be part of. Use any value you like, keeping it consistent across tunnels you wish to load balance traffic onto as a unit. Inside Cloudflare you can manage how your traffic is balanced between and inside your pools.</p></li></ul>
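    <p>Under the hood, these three catalog fields map onto the tunnel daemon’s own flags. A hedged sketch of the equivalent invocation (the hostname, port, and pool name are placeholders; confirm flag names against your <code>cloudflared</code> version):</p>

```sh
# Cloudflare Hostname  -> --hostname
# Local Service URL    -> --url
# Load Balancer Pool   -> --lb-pool
cloudflared tunnel \
  --hostname app.example.com \
  --url http://localhost:8080 \
  --lb-pool us-west
```

    <p>Running the same command with <code>--lb-pool us-east</code> from another cluster adds that cluster’s tunnels to a second pool under the same hostname.</p>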
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5VsuRFMMuVxqBNba5cUjR0/0f06d0d22282b82610390f3c951bd3e6/LB-Configuration.png" />
            
            </figure><p>Assuming you complete this setup for a service in a West Coast DC/OS cluster and an East Coast DC/OS cluster, with respective us-west and us-east LB pools, you end up with a Cloudflare load balancer globally balancing traffic between these clusters. The load balancer can be configured to do geosteering, which you can learn more about <a href="https://support.cloudflare.com/hc/en-us/articles/115000081911-Tutorial-How-to-Set-Up-Load-Balancing-Intelligent-Failover-on-Cloudflare">here</a>.</p>
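    <p>If you prefer configuring geosteering programmatically rather than in the dashboard, the shape of the request body looks roughly like the sketch below. This is a minimal sketch assuming the Cloudflare Load Balancing v4 API field names (<code>region_pools</code>, <code>default_pools</code>, <code>fallback_pool</code>, <code>steering_policy</code>); the hostname and pool IDs are hypothetical placeholders, so verify the exact schema against the API docs before use:</p>

```python
# Sketch: build the JSON body for a geo-steered Cloudflare load balancer.
# Field names follow the Load Balancing v4 API as we understand it;
# app.example.com and the pool IDs below are hypothetical.

def geo_steering_payload(hostname, pool_ids_by_region, fallback_pool):
    """Build a load balancer body that steers traffic by region.

    pool_ids_by_region maps Cloudflare region codes (e.g. "WNAM" for
    western North America, "ENAM" for eastern) to ordered pool ID lists.
    """
    return {
        "name": hostname,                    # public hostname of the LB
        "fallback_pool": fallback_pool,      # used if all region pools are down
        "default_pools": [                   # flattened fallback ordering
            pool for pools in pool_ids_by_region.values() for pool in pools
        ],
        "region_pools": pool_ids_by_region,  # per-region steering map
        "steering_policy": "geo",
    }

payload = geo_steering_payload(
    "app.example.com",
    {"WNAM": ["us-west-pool-id"], "ENAM": ["us-east-pool-id"]},
    fallback_pool="us-east-pool-id",
)
```

    <p>You would then send this body to the load balancers endpoint for your zone with your usual API credentials.</p>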
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XLHj1d6PnXMgyoRrj7ICc/84715d0fc25f14045c0a36454d7d1f12/LB-Settings.png" />
            
            </figure><p>For more details see the DC/OS Argo Tunnel <a href="https://github.com/dcos/examples/tree/master/cloudflare-argotunnel">documentation</a>. We hope this partnership is a meaningful step towards a simple multi-cloud solution for DC/OS customers.</p><p>To sign up for Cloudflare click <a href="https://www.cloudflare.com/plans/">here</a>, and to sign up for DC/OS click <a href="https://docs.mesosphere.com/1.12/installing/evaluation/">here</a>. We hope this partnership between Cloudflare and Mesosphere will help you drive private, secure, and performant multi-cloud deployments.</p> ]]></content:encoded>
            <category><![CDATA[Data Center]]></category>
            <category><![CDATA[Argo Smart Routing]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Tunnel]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <guid isPermaLink="false">61gNuny6p1knOUnyxbpVzZ</guid>
            <dc:creator>Tom Brightbill</dc:creator>
        </item>
    </channel>
</rss>