
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Tue, 14 Apr 2026 18:54:29 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Real World Serverless: Serverless Use Cases and Best Practices]]></title>
            <link>https://blog.cloudflare.com/realworldserverlesssingapore/</link>
            <pubDate>Mon, 17 Dec 2018 18:44:49 GMT</pubDate>
            <description><![CDATA[ Cloudflare Workers has had a very busy 2018. Throughout the year, Workers moved from beta to general availability, continued to expand its footprint as Cloudflare grew to 155 locations, and added new features and services to help developers create increasingly advanced applications. ]]></description>
            <content:encoded><![CDATA[ <p><a href="https://developers.cloudflare.com/workers/">Cloudflare Workers</a> has had a very busy 2018. Throughout the year, Workers moved from beta to general availability, continued to expand its footprint as Cloudflare grew to 155 locations, and added new features and services to help developers create increasingly advanced applications.</p><p>To cap off 2018 we decided to hit the road (and then head to the airport) with our Real World Serverless event series in San Francisco, Austin, London, Singapore, Sydney, and Melbourne. It was a great time sharing serverless application development insights we’ve discovered over the past year, as well as demonstrating how to build applications with new services like our key value store, <a href="https://developers.cloudflare.com/workers/kv/">Cloudflare Workers KV</a>.</p><p>Below is a recording from our Singapore Real World Serverless event. It included three talks about serverless technology featuring <a href="https://twitter.com/obezuk">Tim Obezuk</a>, <a href="https://twitter.com/stnly">Stanley Tan</a>, and <a href="https://twitter.com/remyguercio">Remy Guercio</a> from Cloudflare. They spoke about the fundamentals of serverless technology, the twelve factors of serverless application development, and achieving No Ops at scale with network-based serverless.</p><p>If you’d like to join us in person to talk about serverless, we’ll be announcing 2019 event locations starting in the new year.</p>
    <div>
      <h3><b>About the talks</b></h3>
      <a href="#about-the-talks">
        
      </a>
    </div>
    <p><b>Fundamentals of Serverless Technology - Tim Obezuk (0:00-13:56)</b></p><p>Tim explores the anatomy of Cloudflare’s serverless technology, Cloudflare Workers, and how it can be used to improve availability, build faster websites and save costs. Workers allows you to run JavaScript from 150+ data centers around the world.</p><p><b>The Serverless Twelve Factors - Stanley Tan (13:56-22:46)</b></p><p>Developers all know the benefits of the Twelve-Factor App methodology; it is now the industry standard for building modern web app services. Let’s take a look at how it applies to a serverless platform.</p><p><b>Achieving No Ops at Scale with Network-Based Serverless - Remy Guercio (22:46-49:21)</b></p><p>While most major serverless platforms have done an effective job of abstracting away the concept of a single server or group of servers, they have yet to make it as easy to deploy globally as it is to deploy to a specific region. Building global applications with region-based serverless providers still requires significant effort to set up both frontend load balancing and backend data replication. Let’s explore how network-based serverless providers are helping developers build applications of all sizes with a true No Ops mentality.</p><p>Check out the Workers recipes listed in our docs <a href="https://developers.cloudflare.com/workers/">here »</a></p> ]]></content:encoded>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Events]]></category>
            <category><![CDATA[Video]]></category>
            <category><![CDATA[Best Practices]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">5COnRqYvzl7YKeqVjL41OP</guid>
            <dc:creator>Andrew Fitch</dc:creator>
            <dc:creator>Remy Guercio</dc:creator>
        </item>
        <item>
            <title><![CDATA[Validating Leaked Passwords with k-Anonymity]]></title>
            <link>https://blog.cloudflare.com/validating-leaked-passwords-with-k-anonymity/</link>
            <pubDate>Wed, 21 Feb 2018 19:00:44 GMT</pubDate>
            <description><![CDATA[ Today, v2 of Pwned Passwords was released as part of the Have I Been Pwned service offered by Troy Hunt. Containing over half a billion real world leaked passwords, this database provides a vital tool for correcting the course of how the industry combats modern threats against password security. ]]></description>
            <content:encoded><![CDATA[ <p>Today, <a href="https://www.troyhunt.com/ive-just-launched-pwned-passwords-version-2/">v2 of <i>Pwned Passwords</i> was released</a> as part of the <i>Have I Been Pwned</i> service offered by Troy Hunt. Containing over half a billion real-world leaked passwords, this database provides a vital tool for correcting the course of how the industry combats modern threats against password security.</p><p>I have written about why we need to rethink password security and about <i>Pwned Passwords v2</i> in a companion post: <a href="/how-developers-got-password-security-so-wrong/"><i>How Developers Got Password Security So Wrong</i></a>. Here, I want to discuss one of the technical contributions Cloudflare has made towards protecting user information when using this tool.</p><p>Cloudflare continues to support <i>Pwned Passwords</i> by providing <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDN</a> and security functionality such that the data can easily be made available for download in raw form to organisations to protect their customers. Further, as part of the second iteration of this project, I have also worked with Troy on designing and implementing <a href="https://www.cloudflare.com/learning/security/api/what-is-api-endpoint/">API endpoints</a> that support anonymised <i>range queries</i>, which function as an additional layer of security for those consuming the API.</p><p>This contribution allows <i>Pwned Passwords</i> clients to use <i>range queries</i> to search for breached passwords, without having to disclose a complete unsalted password hash to the service.</p>
    <div>
      <h3>Getting Password Security Right</h3>
      <a href="#getting-password-security-right">
        
      </a>
    </div>
    <p>Over time, the industry has realised that complex password composition rules (such as requiring a minimum number of special characters) have done little to improve user behaviour: they have done little to stop users putting personal information in passwords, to encourage avoiding common passwords, or to prevent the reuse of previously breached passwords<a href="#fn1">[1]</a>. <a href="https://www.cloudflare.com/learning/bots/what-is-credential-stuffing/">Credential Stuffing</a> has recently become a real threat; usernames and passwords are obtained from compromised websites and then injected into other websites until compromised user accounts are found.</p><p>This works because users reuse passwords across different websites; when one set of credentials is breached on one site, it can be reused on others. Here are some examples of how credentials can be breached from insecure websites:</p><ul><li><p>websites which don't use <a href="https://www.cloudflare.com/learning/bots/what-is-rate-limiting/">rate limiting</a> or challenge login requests can have users' login credentials breached by brute-force attacks that try common passwords for a given user,</p></li><li><p>database dumps from hacked websites can be taken offline and the password hashes cracked; modern GPUs make this very efficient for dictionary passwords (even with algorithms like Argon2, PBKDF2 and BCrypt),</p></li><li><p>many websites still do not use any form of password hashing; once breached, passwords can be captured in raw form,</p></li><li><p>proxy attacks or the hijacking of a web server can allow passwords to be captured before they're hashed.</p></li></ul><p>Password reuse compounds the problem; having obtained real-life username/password combinations, attackers can inject them into other websites (such as payment gateways or social networks) until they gain access to further accounts, often of higher value than the originally compromised site.</p><p>Under <a href="https://pages.nist.gov/800-63-3/sp800-63b.html">recent NIST guidance</a>, it is a requirement, when storing or updating passwords, to ensure they do not contain values which are commonly used, expected or compromised<a href="#fn2">[2]</a>. Research has found that 88.41% of users who received a <i>fear appeal</i> later set unique passwords, whilst only 4.45% of users who did not receive a fear appeal did so<a href="#fn3">[3]</a>.</p><p>Unfortunately, there are a lot of leaked passwords out there; the downloadable raw data from <i>Pwned Passwords</i> currently contains over 30 GB of password hashes.</p>
    <div>
      <h3>Anonymising Password Hashes</h3>
      <a href="#anonymising-password-hashes">
        
      </a>
    </div>
    <p>The key problem with the old <i>Pwned Passwords</i> API (and all similar services) lies in how passwords are checked: users are effectively required to submit unsalted hashes of their passwords to identify whether a password has been breached. The hashes must be unsalted, as salting them would make them computationally impractical to search quickly.</p><p>Currently, two choices are available for validating whether a password has been leaked:</p><ul><li><p>Submit the password (as an unsalted hash) to a third-party service, where the hash can potentially be stored for later cracking or analysis. For example, if you query a third-party API for a leaked password using a WordPress plugin, the IP of the request can be used to identify the WordPress installation and then breach it once the password is cracked (such as after a later disclosure); or,</p></li><li><p>download the entire list of password hashes, uncompress the dataset and then run a search to see if your password hash is listed.</p></li></ul><p>Needless to say, this conflict can seem like being placed between a <a href="https://www.cloudflare.com/learning/security/how-to-improve-wordpress-security/">security-conscious rock</a> and an insecure hard place.</p>
    <div>
      <h3>The Middle Way</h3>
      <a href="#the-middle-way">
        
      </a>
    </div>
    
    <div>
      <h4>The Private Set Intersection (PSI) Problem</h4>
      <a href="#the-private-set-intersection-psi-problem">
        
      </a>
    </div>
    <p>Academic computer scientists have considered the problem of how two (or more) parties can validate the intersection of their data (from two or more unequal sets each side already holds) without either sharing information about what they have. Whilst this work is exciting, these techniques are new: they haven't been subject to long-term review by the cryptography community, and the necessary cryptographic primitives have not been implemented in any major libraries. Additionally (but critically), PSI implementations have substantially higher overhead than our <i>k</i>-Anonymity approach (particularly for communication<a href="#fn4">[4]</a>). Even the current academic state of the art is not within acceptable performance bounds for an API service, with the communication overhead being equivalent to downloading the entire set of data.</p>
    <div>
      <h4>k-Anonymity</h4>
      <a href="#k-anonymity">
        
      </a>
    </div>
    <p>Instead, our approach adds an additional layer of security by utilising a mathematical property known as <i>k</i>-Anonymity and applying it to password hashes in the form of <i>range queries</i>. As such, the <i>Pwned Passwords</i> API service never gains enough information about a non-breached password hash to be able to breach it later.</p><p><i>k</i>-Anonymity is used in multiple fields to release anonymised but workable datasets; for example, so that hospitals can release patient information for medical research whilst withholding information that discloses personal details. Formally, a data set can be said to hold the property of <i>k</i>-Anonymity if, for every record in a released table, there are at least <code>k − 1</code> other records identical to it.</p><p>By using this property, we are able to separate hashes into anonymised "buckets". A client anonymises the user-supplied hash, downloads all leaked hashes in the same anonymised "bucket" as that hash, and then performs an offline check to see whether the user-supplied hash is among them.</p><p>In more concrete terms:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6FkAe2DhLMl2u6Dd4ZXx0P/fc76ccce5fe7570981625977f7d85ace/hash-bucket.png" />
            
            </figure><p>In essence, we turn the tables on password derivation functions; instead of salting hashes to the point at which they are unique (even for identical inputs), we introduce ambiguity into what the client is requesting.</p><p>Given that hashes are essentially fixed-length hexadecimal values, we are able to simply truncate them, instead of having to resort to a decision-tree structure to filter down the data. This does mean buckets are of unequal sizes, but it allows clients to query with a single API request.</p><p>This approach can be implemented in a trivial way. Suppose a user enters the password <code>test</code> into a login form, and the service they’re logging into is programmed to validate whether their password is in a database of leaked password hashes. First, the client generates a hash (in our example using SHA-1) of <code>a94a8fe5ccb19ba61c4c0873d391e987982fbbd3</code>. The client then truncates the hash to a predetermined number of characters (for example, 5), resulting in a Hash Prefix of <code>a94a8</code>. This Hash Prefix is used to query the remote database for all hashes starting with that prefix (for example, by making an HTTP request to <code>example.com/a94a8.txt</code>). The entire hash list is then downloaded, and each returned hash is compared against the locally generated hash; if any match, the password is known to have been leaked.</p><p>As this can easily be implemented over HTTP, client-side caching can be used for performance; the API is simple enough for developers to implement with little pain.</p><p>Below is a simple Bash implementation of how the <i>Pwned Passwords</i> API can be queried using <i>range queries</i> (<a href="https://gist.github.com/IcyApril/56c3fdacb3a640f37c245e5813b98b99">Gist</a>):</p>
            <pre><code>#!/bin/bash

echo -n "Password: "
read -rs password
echo
# printf avoids echo's handling of backslashes and leading dashes;
# awk strips the "(stdin)= " label some openssl versions prepend
hash="$(printf '%s' "$password" | openssl sha1 | awk '{print $NF}')"
upperCase="$(echo "$hash" | tr '[a-z]' '[A-Z]')"
prefix="${upperCase:0:5}"
# The API returns one SUFFIX:COUNT line per hash in the bucket
response=$(curl -s "https://api.pwnedpasswords.com/range/$prefix")
while read -r line; do
  lineOriginal="$prefix$line"
  if [ "${lineOriginal:0:40}" == "$upperCase" ]; then
    echo "Password breached."
    exit 1
  fi
done &lt;&lt;&lt; "$response"

echo "Password not found in breached database."
exit 0</code></pre>
            
    <div>
      <h3>Implementation</h3>
      <a href="#implementation">
        
      </a>
    </div>
    <p>Hashes (even in unsalted form) have two properties that are useful in anonymising data.</p><p>Firstly, the Avalanche Effect means that a small change in the input results in a very different hash; this means you can't infer the contents of one hash from another. This is true even in truncated form.</p><p>For example, the Hash Prefix <code>21BD1</code> contains 475 seemingly unrelated passwords, including:</p>
            <pre><code>lauragpe
alexguo029
BDnd9102
melobie
quvekyny</code></pre>
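The avalanche effect itself is easy to demonstrate with a couple of lines of Python; this is just an illustrative sketch (the SHA-1 digest of <code>test</code> is the same one used in the worked example earlier in this post):

```python
import hashlib

# Two inputs differing only in the case of one letter produce
# unrelated digests, so their truncated prefixes are unrelated too.
h1 = hashlib.sha1(b"test").hexdigest()
h2 = hashlib.sha1(b"Test").hexdigest()
print(h1[:5])  # a94a8 - the Hash Prefix from the earlier example
print(h2[:5])
```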
            <p>Further, hashes are fairly uniformly distributed. If we count the original 320 million leaked passwords (in Troy's dataset) by the first hexadecimal character of the hash, the difference in the number of hashes between the largest and the smallest bucket is ≈ 1%. The chart below shows hash count by first hexadecimal digit:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7IqJBkvelNZ0QVFoDEfrKY/96682f7d36961752b3f2c14bc6235f20/hashes_by_hash_prefix.png" />
            
            </figure><p>Algorithm 1 provides a simple check to discover how far we can truncate hashes whilst ensuring every "bucket" contains more than one hash. It requires the hashes to be sorted by hexadecimal value. The algorithm, including the initial merge sort, runs in roughly <code>O(n log n)</code> time (worst-case):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/rdI6HJEupI0hSGAsDvr4m/74045dd1a84842e41127e4a9d9213480/Screen-Shot-2018-02-18-at-23.37.15.png" />
            
            </figure><p>After identifying the Maximum Hash Prefix length, it is fairly easy to separate the hashes into their buckets, as described in Algorithm 3:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3peODDMG8GqOHrkA48VOPt/168f1da40b0b41342d571791c3789541/Screen-Shot-2018-02-18-at-23.02.02.png" />
            
            </figure><p>This implementation was originally evaluated on a dataset of over 320 million breached passwords; we found that the Maximum Prefix Length to which all hashes can be truncated, whilst maintaining the property of k-anonymity, is 5 characters. When hashes are grouped by a Hash Prefix of 5 characters, the median number of hashes associated with a Hash Prefix is 305. With response sizes for a query varying from 8.6KB to 16.8KB (a median of 12.2KB), the dataset is usable in many practical scenarios and is certainly a good response size for an API client.</p><p>On the new <i>Pwned Passwords</i> dataset (with over half a billion passwords), and keeping the Hash Prefix length at 5, the average number of hashes returned is 478 - the smallest buckets contain 381 hashes (<code>E0812</code> and <code>E613D</code>) and the largest contain 584 (<code>00000</code> and <code>4A4E8</code>).</p><p>Splitting the hashes into buckets by a Hash Prefix of 5 characters means a maximum of 16^5 = 1,048,576 buckets would be utilised (for SHA-1), assuming every possible Hash Prefix contains at least one hash. In our datasets this was the case: the number of distinct Hash Prefix values was equal to the highest possible number of buckets. Whilst for secure hashing algorithms it is computationally infeasible to invert the hash function, it is worth noting that, as a SHA-1 hash is 40 hexadecimal characters long and 5 of those are utilised by the Hash Prefix, the total number of possible hashes associated with a given Hash Prefix is 16^35 ≈ 1.39 × 10^42.</p>
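The two algorithms above can be sketched in a few lines of Python. This is an illustrative reimplementation over a toy list of shortened "hashes", not the code used to build the production dataset; it exploits the sorted order, since a hash can only share a prefix bucket with its immediate neighbours.

```python
def lcp(a, b):
    """Length of the longest common prefix of two hex digests."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def max_prefix_length(hashes):
    """Sketch of Algorithm 1: the longest Hash Prefix length at which
    every bucket still contains at least two hashes, i.e. k-anonymity
    holds with k >= 2. Sorting first means each hash only needs to be
    compared with its immediate neighbours."""
    hs = sorted(hashes)
    best = len(hs[0])
    for i, h in enumerate(hs):
        shared = [lcp(h, hs[j]) for j in (i - 1, i + 1) if 0 <= j < len(hs)]
        best = min(best, max(shared))
    return best

def split_into_buckets(hashes, prefix_len):
    """Sketch of Algorithm 3: group hash suffixes under their Hash Prefix."""
    buckets = {}
    for h in sorted(hashes):
        buckets.setdefault(h[:prefix_len], []).append(h[prefix_len:])
    return buckets
```

For example, for the toy hashes `AAAA1`, `AAAA2`, `AAAB3`, `AAAB4`, the maximum usable prefix length is 4, giving two buckets of two suffixes each.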
    <div>
      <h3>Important Caveats</h3>
      <a href="#important-caveats">
        
      </a>
    </div>
    <p>It is important to note that where a user's password is already breached, an API call for a specific range of breached passwords can narrow the search candidates used in a <a href="https://www.cloudflare.com/learning/bots/brute-force-attack/">brute-force attack</a> - although the API service has no way of determining whether the client was searching for a password that was breached. Running queries for additional Hash Prefixes, chosen by a deterministic algorithm, can help reduce this risk.</p><p>One reason this is important is that this implementation does not currently guarantee <i>l</i>-diversity, meaning a bucket may contain a hash which is of substantially higher use than others. In the future we hope to use percentile-based usage information from the original breached data to better guarantee this property.</p><p>For general users, <i>Pwned Passwords</i> is usually exposed via a web interface, which uses a JavaScript client to run this process; if the origin web server were hijacked to change the JavaScript being returned, this computation could be removed (and the password sent to the hijacked origin server). Whilst JavaScript requests are somewhat transparent to a developer inspecting them, this cannot be depended upon; for technical users, non-web clients are preferable.</p><p>The original use-case for this service was to be deployed privately in a Cloudflare data centre, where our services can use it to enhance user security, with <i>range queries</i> complementing the existing transport security. Depending on your risks, it is safer to deploy this service yourself (in your own data centre) and use the <i>k</i>-anonymity approach to validate passwords from services that do not themselves have the resources to store an entire database of leaked password hashes.</p><p>I would strongly recommend against storing the <i>range queries</i> made by users of your service; if you do for whatever reason, store them only as aggregate analytics such that they cannot be linked back to any given user's password.</p>
    <div>
      <h3>Final Thoughts</h3>
      <a href="#final-thoughts">
        
      </a>
    </div>
    <p>Going forward, as we test this technology further, Cloudflare is looking into how a private deployment of this service can better offer security functionality, both for log-in requests to our dashboard and for customers who want to protect against credential stuffing on their own websites using our edge network. We are also considering how to incorporate recent work on the Private Set Intersection Problem, alongside <i>l</i>-diversity, for additional security guarantees. As always, we'll keep you updated right here on our blog.</p><hr /><ol><li><p>Campbell, J., Ma, W. and Kleeman, D., 2011. Impact of restrictive composition policy on user password choices. Behaviour &amp; Information Technology, 30(3), pp.379-388. <a href="#fnref1">↩︎</a></p></li><li><p>Grassi, P. A., Fenton, J. L., Newton, E. M., Perlner, R. A., Regenscheid, A. R., Burr, W. E., Richer, J. P., Lefkovitz, N. B., Danker, J. M., Choong, Y.-Y., Greene, K. K., and Theofanos, M. F. (2017). NIST Special Publication 800-63B Digital Identity Guidelines, chapter Authentication and Lifecycle Management. National Institute of Standards and Technology, U.S. Department of Commerce. <a href="#fnref2">↩︎</a></p></li><li><p>Jenkins, Jeffrey L., Mark Grimes, Jeffrey Gainer Proudfoot, and Paul Benjamin Lowry. "Improving password cybersecurity through inexpensive and minimally invasive means: Detecting and deterring password reuse through keystroke-dynamics monitoring and just-in-time fear appeals." Information Technology for Development 20, no. 2 (2014): 196-213. <a href="#fnref3">↩︎</a></p></li><li><p>De Cristofaro, E., Gasti, P. and Tsudik, G., 2012, December. Fast and private computation of cardinality of set intersection and union. In International Conference on Cryptology and Network Security (pp. 218-231). Springer, Berlin, Heidelberg. <a href="#fnref4">↩︎</a></p></li></ol> ]]></content:encoded>
            <category><![CDATA[Passwords]]></category>
            <category><![CDATA[Best Practices]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Salt]]></category>
            <guid isPermaLink="false">4k6ry6xuTMJwpexgEzV2hB</guid>
            <dc:creator>Junade Ali</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Developers got Password Security so Wrong]]></title>
            <link>https://blog.cloudflare.com/how-developers-got-password-security-so-wrong/</link>
            <pubDate>Wed, 21 Feb 2018 19:00:11 GMT</pubDate>
            <description><![CDATA[ Both in our real lives, and online, there are times where we need to authenticate ourselves - where we need to confirm we are who we say we are. This can be done using three things. ]]></description>
            <content:encoded><![CDATA[ <p>Both in our real lives and online, there are times when we need to authenticate ourselves - to confirm we are who we say we are. This can be done using three things:</p><ul><li><p>Something you <i>know</i></p></li><li><p>Something you <i>have</i></p></li><li><p>Something you <i>are</i></p></li></ul><p>Passwords are an example of something you <i>know</i>; they were introduced in 1961 for authentication on a time-sharing computer at MIT. Shortly afterwards, a PhD researcher breached this system (by simply downloading a list of unencrypted passwords) and used the time allocated to others on the computer.</p><p>As time has gone on, developers have continued to store passwords insecurely, and users have continued to set them weakly. Despite this, no viable alternative to passwords has been created: no system retains all the benefits that passwords offer, as researchers have rarely considered real-world constraints<a href="#fn1">[1]</a>. For example, when using fingerprints for authentication, engineers often forget that a sizable percentage of the population does not have usable fingerprints, or that hardware upgrades carry a cost.</p>
    <div>
      <h3>Cracking Passwords</h3>
      <a href="#cracking-passwords">
        
      </a>
    </div>
    <p>In the 1970s, people started thinking about how to better store passwords, and cryptographic hashing started to emerge.</p><p>Cryptographic hashes work like trapdoors; whilst it's easy to hash a password, it's far harder to turn that "hash" back into the original input (computationally infeasible, for an ideal hashing algorithm). They are used in everything from speeding up file searches to the One Time Password generators used by banks.</p><p>Passwords should ideally be hashed using specialised functions like Argon2, BCrypt or PBKDF2, which are designed to defeat Rainbow Table attacks.</p><p>If you were to hash the password <code>p4$$w0rd</code> using the SHA-1 hashing algorithm, the output would be <code>6c067b3288c1b5c791afa04e12fb013ed2e84d10</code>. This output is the same every time the algorithm is run. As a result, attackers are able to create Rainbow Tables which contain the hashes of common passwords, and this information is used to break password hashes (where the password and hash are listed in a Rainbow Table).</p><p>Algorithms like BCrypt essentially salt passwords before hashing them, using a random string. This random string is stored alongside the password hash and helps make the password harder to crack by making the output unique. The hashing process is repeated many times (defined by a difficulty variable), each time adding the random salt onto the output of the hash and rerunning the hash computation.</p><p>For example, the BCrypt hash <code>$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy</code> starts with <code>$2a$10$</code>, which indicates the algorithm used is BCrypt, and contains a random salt of <code>N9qo8uLOickgx2ZMRZoMye</code> and a resulting hash of <code>IjZAgcfl7p92ldGxad68LJZdL17lhWy</code>. Storing the salt allows the password hash to be regenerated identically when the input is known.</p><p>Unfortunately, salting is no longer enough; passwords can be cracked ever more quickly using modern GPUs (which specialise in doing the same task over and over). When a site suffers a security breach, users' password hashes can be taken away in database dumps and cracked offline.</p><p>Additionally, websites that fail to rate limit login requests or use captchas can be attacked by Brute Force: for a given user, an attacker will repeatedly try different (but common) passwords until they gain access to that user's account.</p><p>Where sites lock users out after a handful of failed login attempts, attacks can instead be targeted to move on quickly to a new account after the most common set of passwords has been attempted. Lists like the following (in some cases with many, many more passwords) can be used in an attempt to breach an account:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5uMRyoSntS2dkOn7C9Zc6v/c882908f5dfc1b8b5c257e8583f4b27a/common-weak-passwords.png" />
            
            </figure><p>The industry has tried to combat this problem with password composition rules: requiring users to comply with complex rules (such as a minimum number of digits or punctuation symbols) before setting a password. Research has shown that this hasn't helped combat password reuse, weak passwords or users putting personal information in passwords.</p>
    <div>
      <h4>Credential Stuffing</h4>
      <a href="#credential-stuffing">
        
      </a>
    </div>
    <p>Whilst it may seem that this is only a bad sign for websites that store passwords weakly, Credential Stuffing makes the problem even worse.</p><p>It is common for users to reuse passwords from site to site, meaning a username and password from a compromised website can be used to breach far more important accounts - like online banking gateways or government logins. When a password is reused, it takes just one website being breached to gain access to every other site that a user holds credentials for.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6l6Zm2PWsT6cXcXHpPevKX/87148160a2ed6d72bfd6288e156ecd28/this-is-not-fine-009-a7b6e6.png" />
            
            </figure><p> <a href="https://thenib.com/this-is-not-fine">This Is Not Fine - The Nib</a></p>
    <div>
      <h3>Fixing Passwords</h3>
      <a href="#fixing-passwords">
        
      </a>
    </div>
    <p>There are fundamentally three things that need to be done to fix this problem:</p><ul><li><p>Improving user decisions through good UX</p></li><li><p>Improving developer education</p></li><li><p>Eliminating the reuse of breached passwords</p></li></ul>
    <div>
      <h4>How Can I Secure Myself (or my Users)?</h4>
      <a href="#how-can-i-secure-myself-or-my-users">
        
      </a>
    </div>
    <p>Before discussing the things we're doing, I wanted to briefly discuss what you can do to help protect yourself now. For most users, there are three steps you can take immediately.</p><p>Use a Password Manager (like 1Password or LastPass) to set random, unique passwords for every site. Additionally, look to enable Two-Factor Authentication where possible; this uses something you <i>have</i>, in addition to the password you <i>know</i>, to validate you. It means that, alongside your password, you have to enter a short-lived code from a device like your phone before being able to log in to a site.</p><p>Two-Factor Authentication is supported on many of the world's most popular social media, banking and shopping sites. You can find out how to enable it on popular websites at <a href="https://www.turnon2fa.com/tutorials/">turnon2fa.com</a>. If you are a developer, you should make the effort to support Two-Factor Authentication.</p><p>Set a secure, memorable password for your password manager; and yes, turn on Two-Factor Authentication for it (and keep your backup codes safe). You can find additional security tips (including tips on how to create a secure main password) in my blog post: <a href="/cyber-security-advice-for-your-parents/">Simple Cyber Security Tips</a>.</p><p>Developers should look to abolish bad-practice composition rules (and simplify rules as much as possible). Password expiration policies do more harm than good, so seek to do away with them. 
For further information refer to the blog post by the UK's National Cyber Security Centre: <a href="https://www.ncsc.gov.uk/articles/problems-forcing-regular-password-expiry">The problems with forcing regular password expiry</a>.</p><p>Finally; Troy Hunt has an excellent blog post on passwords for users and developers alike: <a href="https://www.troyhunt.com/passwords-evolved-authentication-guidance-for-the-modern-era/">Passwords Evolved: Authentication Guidance for the Modern Era</a></p>
    <div>
      <h4>Improving Developer Education</h4>
      <a href="#improving-developer-education">
        
      </a>
    </div>
    <p>Developers should seek to build a culture of security in the organisations where they work: talk about security, about the benefits of challenging malicious login requests, and about password hashing in simple terms.</p><p>If you're working on an open-source project that handles authentication, expose easy password hashing APIs; for example, the <code>password_hash</code>, <code>password_needs_rehash</code> &amp; <code>password_verify</code> functions in modern PHP versions.</p>
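<p>As an illustration of that hash-then-verify pattern (a sketch only: the function names mirror PHP's, while the salt format and iteration count are my own choices, and a vetted algorithm such as bcrypt or Argon2 is preferable in production), here is a rough Python equivalent using the standard library's PBKDF2:</p>

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to your hardware


def hash_password(password: str) -> str:
    """Hash a password with a fresh random salt; returns 'salt$hash' in hex."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt.hex() + "$" + digest.hex()


def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), ITERATIONS
    )
    return hmac.compare_digest(digest.hex(), digest_hex)
```

<p>Both sides only ever store the salt and the derived hash; the original password is never recoverable from what is stored.</p>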
    <div>
      <h4>Eliminating Password Reuse</h4>
      <a href="#eliminating-password-reuse">
        
      </a>
    </div>
    <p>We know that complex password composition rules are largely ineffective, and recent guidance has followed suit. A better alternative to composition rules is to block users from signing up with passwords which are known to have been breached. Under <a href="https://pages.nist.gov/800-63-3/sp800-63b.html">recent NIST guidance</a>, it is a requirement, when storing or updating passwords, to ensure they do not contain values which are commonly used, expected or compromised<a href="#fn2">[2]</a>.</p><p>This is easier said than done: the most recent version of Troy Hunt's <i>Pwned Passwords</i> database contains over half a billion passwords (over 30 GB uncompressed). Whilst developers can use API services to check if a password is reused, this requires sending either the raw password or an unsalted hash of it. This can be especially problematic when multiple services handle authentication in a business, and each has to store a large quantity of passwords.</p><p>This is a problem I've started looking into recently; as part of our contribution to Troy Hunt's <i>Pwned Passwords</i> database, I have designed a <i>range search</i> API that allows developers to check if a password is reused without needing to share the password (even in hashed form), instead only needing to send a short segment of the cryptographic hash used. You can find more information on this contribution in the post: <a href="/validating-leaked-passwords-with-k-anonymity/">Validating Leaked Passwords with k-Anonymity</a>.</p><p>Version 2 of <i>Pwned Passwords</i> is now available; you can find more information on how it works in Troy Hunt's blog post "<a href="https://www.troyhunt.com/ive-just-launched-pwned-passwords-version-2/">I've Just Launched Pwned Passwords, Version 2</a>".</p><hr /><ol><li><p>Bonneau, J., Herley, C., Van Oorschot, P.C. and Stajano, F., 2012, May. The quest to replace passwords: A framework for comparative evaluation of web authentication schemes. In Security and Privacy (SP), 2012 IEEE Symposium on (pp. 553-567). IEEE. <a href="#fnref1">↩︎</a></p></li><li><p>Grassi, P. A., Fenton, J. L., Newton, E. M., Perlner, R. A., Regenscheid, A. R., Burr, W. E., Richer, J. P., Lefkovitz, N. B., Danker, J. M., Choong, Y.-Y., Greene, K. K., and Theofanos, M. F. (2017). NIST Special Publication 800-63B Digital Identity Guidelines, chapter Authentication and Lifecycle Management. National Institute of Standards and Technology, U.S. Department of Commerce. <a href="#fnref2">↩︎</a></p></li></ol> ]]></content:encoded>
            <category><![CDATA[Passwords]]></category>
            <category><![CDATA[Best Practices]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Salt]]></category>
            <guid isPermaLink="false">6lsrdeZvgI7O6CidQWI49j</guid>
            <dc:creator>Junade Ali</dc:creator>
        </item>
        <item>
            <title><![CDATA[SEO Performance in 2018 Using Cloudflare]]></title>
            <link>https://blog.cloudflare.com/seo-performance-in-2018-using-cloudflare/</link>
            <pubDate>Sun, 28 Jan 2018 15:00:00 GMT</pubDate>
            <description><![CDATA[ For some businesses SEO is a bad word, and for good reason. Google and other search engines keep their algorithms a well-guarded secret making SEO implementation not unlike playing a game where the referee won’t tell you all the rules.  ]]></description>
            <content:encoded><![CDATA[ <p>For some businesses SEO is a bad word, and for good reason. Google and other search engines keep their algorithms a well-guarded secret making SEO implementation not unlike playing a game where the referee won’t tell you all the rules. While SEO experts exist, the ambiguity around search creates an opening for grandiose claims and misinformation by unscrupulous profiteers claiming expertise.</p><p>If you’ve done SEO research, you may have come across an admixture of legitimate SEO practices, outdated optimizations, and misguided advice. You might have read that using the keyword meta tag in your HTML will help your SEO (<a href="https://webmasters.googleblog.com/2009/09/google-does-not-use-keywords-meta-tag.html">it won’t</a>), that there’s a specific number of instances a keyword should occur on a webpage (<a href="https://www.youtube.com/watch?v=Rk4qgQdp2UA">there isn’t</a>), or that buying links will improve your rankings (<a href="https://webmasters.googleblog.com/2013/02/a-reminder-about-selling-links.html">it likely won’t and will get the site penalized</a>). Let’s sift through the noise and highlight some dos and don’ts for performance-based SEO in 2018.</p>
    <div>
      <h3>SEO is dead, long live SEO!</h3>
      <a href="#seo-is-dead-long-live-seo">
        
      </a>
    </div>
    <p>Nearly every year since its inception, SEO is declared dead. It is true that the scope of best practices for search engines has narrowed over the years as search engines have become smarter, and much of the benefit of SEO can be had by following these two rules:</p><ol><li><p>Create good content</p></li><li><p>Don’t be creepy</p></li></ol><p>Beyond the fairly obvious, there are a number of tactics that can improve how favourably a website is evaluated by Google, Bing and others. This post will focus on optimizing for Google, though the principles and practices likely apply to all search engines.</p>
    <div>
      <h3>Does using Cloudflare hurt my SEO?</h3>
      <a href="#does-using-cloudflare-hurt-my-seo">
        
      </a>
    </div>
    <p>The short answer is no. When asked whether or not Cloudflare can damage search rankings, John Mueller from Google stated <a href="https://twitter.com/JohnMu/status/862265871678529536">CDNs can work great for both users and search engines</a> when properly configured. This is consistent with our findings at Cloudflare, as millions of web properties, including SEO agencies, use our service to improve both performance and SEO.</p>
    <div>
      <h3>Can load time affect a site's SEO ranking?</h3>
      <a href="#can-load-time-affect-a-sites-seo-ranking">
        
      </a>
    </div>
    <p>Yes, it can. Since at least 2010, Google has publicly stated that <a href="https://webmasters.googleblog.com/2010/04/using-site-speed-in-web-search-ranking.html">site speed affects your Google ranking</a>. While most sites at that time were not affected, times have changed, and heavier sites with frontend frameworks, images, CMS platforms and/or a slew of other JavaScript dependencies are the new normal. Google promotes websites that deliver a good user experience, and slow sites are frustrating, so they are penalized in rankings as a result.</p><p>The cost of slow websites to user experience is particularly dramatic on mobile, where limited bandwidth imposes further constraints. Aside from low search rankings, slow-loading sites produce bad outcomes; research by Google indicates <a href="https://storage.googleapis.com/doubleclick-prod/documents/The_Need_for_Mobile_Speed_-_FINAL.pdf">53% of mobile sites are abandoned if load time is more than 3 seconds</a>. Separate research from Google using a deep <a href="https://www.cloudflare.com/learning/ai/what-is-neural-network/">neural network</a> found that as a mobile site’s <a href="https://www.thinkwithgoogle.com/marketing-resources/data-measurement/mobile-page-speed-new-industry-benchmarks/">load time goes from 1 to 7 seconds, the probability of a visitor bouncing increases 113%</a>. The problems surrounding page speed worsen the longer a site takes to load; mobile sites that load in 5 seconds earn 2x more ad revenue than those that take 19 seconds to load (the average time to completely load a site on a 3G connection).</p>
    <div>
      <h3>What tools can I use to evaluate my site's performance?</h3>
      <a href="#what-tools-can-i-use-to-evaluate-my-sites-performance">
        
      </a>
    </div>
    <p>A number of free, well-vetted tools are available for checking a website’s performance. Based on Google’s research, you can <a href="https://testmysite.thinkwithgoogle.com/">estimate the number of visitors you will lose</a> due to excessive loading time on mobile. Not to sound clickbaity, but the results may surprise you.</p><p>As more web traffic shifts to mobile, mobile optimization must be a priority for most websites. Google has announced that in July 2018, mobile speed will also affect SEO placement. If you want to do more research on your site’s overall mobile readiness, you can <a href="https://search.google.com/test/mobile-friendly">check to see if your site is mobile friendly</a>.</p><p>If you’re technically minded and use Chrome, you can pop into the Chrome DevTools and click on the Audits tab to access Lighthouse, Chrome’s built-in analysis tool.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3grz0h0TEpSW0jusWakOhH/a6f38a10bf755c7967da7863a2b09823/audits-tab-seo-screenshot.png" />
            
            </figure><p>Other key metrics used for judging your site's performance include FCP and DCL speeds. First Contentful Paint (FCP) measures the first moment content is painted onto the user’s screen, answering the user’s question: “is this useful?”. The other metric, DOM Content Loaded (DCL), measures when all stylesheets have loaded and the DOM tree can be rendered. Google provides a tool for you to <a href="https://developers.google.com/speed/pagespeed/insights/">measure your website’s FCP and DCL speeds relative to other sites</a>.</p>
    <div>
      <h3>Can spammy websites hosted on the same platform hurt SEO?</h3>
      <a href="#can-spammy-websites-hosted-on-the-same-platform-hurt-seo">
        
      </a>
    </div>
    <p>Generally speaking, there is no cause for concern as shared hosts <a href="https://www.youtube.com/watch?v=AsSwqo16C8s">shouldn’t hurt your SEO</a>, even if some of the sites on the shared host are less reputable. In the unlikely event you find yourself as the only legitimate website on the host that is almost entirely spam, it might be time to rethink your hosting strategy.</p>
    <div>
      <h3>Does downtime hurt SEO?</h3>
      <a href="#does-downtime-hurt-seo">
        
      </a>
    </div>
    <p>If your site is down when it’s crawled, it may be <a href="http://www.thesempost.com/how-an-offline-website-impacts-google-rankings-seo/">temporarily pulled from results</a>. This is why service interruptions, such as getting <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">DDoSed</a> during peak purchase times, can be especially damaging. Typically a site’s ranking will recover when it comes back online; if it’s down for an entire day, it may take up to a few weeks to recover.</p>
    <div>
      <h3>Don’t be creepy in SEO: an incomplete guide</h3>
      <a href="#dont-be-creepy-in-seo-an-incomplete-guide">
        
      </a>
    </div>
    <p>Everybody likes to win, but playing outside the rules can have consequences. For websites that attempt to circumvent Google’s guidelines in an attempt to trick the search algorithms and web crawlers, a perilous future awaits. Here are a few things you should make sure to avoid.</p><p><b>Permitting user-generated spam</b> - sometimes unmoderated comment sections run amok with user-generated spam ads, complete with links to online pharmacies and other unrelated topics. Leaving these types of links in place lowers the quality of your content and may subject you to penalization. Having trouble handling a spam situation? There are <a href="https://support.google.com/webmasters/answer/81749">strategies you can implement</a>.</p><p><b>Link schemes</b> - while sharing links with reputable sources is still a legitimate tactic, excessively sharing links is not. Likewise, purchasing large bundles of links in an attempt to boost SEO by artificially passing PageRank is best avoided. There are many link schemes, and if you’re curious whether or not you’re in violation, look at <a href="https://support.google.com/webmasters/answer/66356">Google’s documentation</a>. If you feel like you might’ve made questionable link decisions in the past and you want to undo them, you can <a href="https://support.google.com/webmasters/answer/2648487">disavow links that point to your site</a>, but use this feature with extreme caution.</p><p><b>Doorway pages</b> - by creating many pages that optimize for specific search phrases but ultimately point to the same page, some sites attempt to saturate all the search terms around a particular topic. While this might be a tempting strategy to boost SEO quickly, it may result in all of the pages losing rank.</p><p><b>Scraping content</b> - in an attempt to artificially build content, some websites will <a href="https://www.cloudflare.com/learning/ai/how-to-prevent-web-scraping/">scrape content</a> from other reputable sources and call it their own. Aside from the fact that this behavior can get a site flagged by the Panda algorithm for unrelated or excessive content, it is also in violation of the guidelines and can result in penalization or removal of a website from results.</p><p><b>Hidden text and links</b> - by hiding text inside a webpage so it’s not visible to users, some websites try to inflate the amount of content on their site or the number of times a keyword occurs. Hiding text behind an image, setting a font size to zero, using CSS to position an element off of the screen, or the classic “white text on a white background” are all tactics to be avoided.</p><p><b>Sneaky redirects</b> - as the name implies, it’s possible to surreptitiously redirect users from the result they were expecting onto something different. Split cases can also occur, where the desktop version of a site is directed to the intended page while mobile users are forwarded to full-screen advertising.</p><p><b>Cloaking</b> - by attempting to show different content to search engines and users, some sites try to circumvent the processes a search engine has in place to filter out low-value content. While cloaking might have a cool name, it’s in violation and can result in rank reduction or listing removal.</p>
    <div>
      <h3>What SEO resources does Google provide?</h3>
      <a href="#what-seo-resources-does-google-provide">
        
      </a>
    </div>
    <p>There are a number of sources that can be considered authoritative when it comes to Google SEO. John Mueller, Gary Illyes and (formerly) Matt Cutts collectively represent a large portion of the official voice of Google search and provide much of the official SEO best-practices content. Aside from the videos, blogs, office hours, and other content provided by these experts, Google also provides the <a href="https://webmasters.googleblog.com/">Google webmaster blog</a> and <a href="https://www.google.com/webmasters/tools/">Google search console</a>, which house various resources and updates.</p><p>Last but not least, if you have web properties currently on Cloudflare, there are <a href="https://support.cloudflare.com/hc/en-us/articles/231109348-How-do-I-Improve-SEO-Rankings-On-My-Website-Using-Cloudflare-">technical optimizations you can make to improve your SEO</a>.</p>
            <category><![CDATA[SEO]]></category>
            <category><![CDATA[Best Practices]]></category>
            <guid isPermaLink="false">PaB6LBBFido4JmgjNw29U</guid>
            <dc:creator>Matthew Williams</dc:creator>
        </item>
        <item>
            <title><![CDATA[Web Cache Deception Attack revisited]]></title>
            <link>https://blog.cloudflare.com/web-cache-deception-attack-revisited/</link>
            <pubDate>Fri, 19 Jan 2018 17:38:00 GMT</pubDate>
            <description><![CDATA[ In April, we wrote about Web Cache Deception attacks, and how our customers can avoid them using origin configuration.  Since our previous blog post, we have looked for but have not seen any large scale attacks like this in the wild. ]]></description>
            <content:encoded><![CDATA[ <p>In April, we wrote about <a href="/understanding-our-cache-and-the-web-cache-deception-attack/">Web Cache Deception attacks</a>, and how our customers can avoid them using origin configuration.</p><p>Read that blog post to learn how to configure your website, and, for those who are not able to do that, how to disable caching for certain URIs to prevent this type of attack. Since our previous blog post, we have looked for, but have not seen, any large-scale attacks like this in the wild.</p><p>Today, we have released a tool to help our customers make sure only assets that should be cached are being cached.</p>
    <div>
      <h3>A brief re-introduction to Web Cache Deception attack</h3>
      <a href="#a-brief-re-introduction-to-web-cache-deception-attack">
        
      </a>
    </div>
    <p>Recall that the Web Cache Deception attack happens when an attacker tricks a user into clicking a link in the format of <code>http://www.example.com/newsfeed/foo.jpg</code>, when <code>http://www.example.com/newsfeed</code> is the location of a dynamic script that returns different content for different users. Under some website configurations (the default in Apache, but not in nginx), this would invoke <code>/newsfeed</code> with <a href="https://tools.ietf.org/html/rfc3875#section-4.1.5"><code>PATH_INFO</code></a> set to <code>/foo.jpg</code>. If <code>http://www.example.com/newsfeed/foo.jpg</code> does not return the proper <code>Cache-Control</code> headers telling a web cache not to cache the content, web caches may decide to cache the result based on the extension of the URL. The attacker can then visit the same URL and retrieve the cached content of a private page.</p><p>The proper fix is to configure your website either to reject requests with the extra <code>PATH_INFO</code> or to return the proper <code>Cache-Control</code> header. Sometimes our customers are not able to do that (maybe the website is running third-party software they do not fully control); in that case, they can apply a Bypass Cache Page Rule for those script locations.</p>
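<p>For Apache specifically, those fixes can be sketched in configuration. This is an illustrative snippet, not a drop-in rule: the <code>/newsfeed</code> path is the example from above, and the <code>Header</code> directive requires <code>mod_headers</code> to be enabled:</p>

```apacheconf
# Reject requests that tack extra path segments (PATH_INFO) onto a
# script, so /newsfeed/foo.jpg no longer invokes /newsfeed (404 instead).
AcceptPathInfo Off

# Alternatively (or additionally), mark the dynamic responses as uncacheable.
<Location "/newsfeed">
    Header set Cache-Control "private, no-store"
</Location>
```

<p>Either measure alone is enough to defeat the attack described above; doing both leaves less room for a misconfigured cache in front of the origin.</p>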
    <div>
      <h3>Cache Deception Armor</h3>
      <a href="#cache-deception-armor">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/KOEBIR0VfOtdSOjvueaLv/7898ccfe2589400017b74e2a928d4e20/photo-1460194436988-671f763436b7" />
            
            </figure><p>Photo by <a href="https://unsplash.com/@enzo74?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Henry Hustava</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></p><p>The new Cache Deception Armor Page Rule protects customers from Web Cache Deception attacks while still allowing static assets to be cached. It verifies that a URL's extension matches the returned <code>Content-Type</code>. In the above example, if <code>http://www.example.com/newsfeed</code> is a script that outputs a web page, the <code>Content-Type</code> is <code>text/html</code>. On the other hand, <code>http://www.example.com/newsfeed/foo.jpg</code> is expected to have <code>image/jpeg</code> as its <code>Content-Type</code>. When we see a mismatch that could result in a Web Cache Deception attack, we will not cache the response.</p><p>There are some exceptions to this. For example, if the returned <code>Content-Type</code> is <code>application/octet-stream</code>, we don't care what the extension is, because that's typically a signal instructing the browser to save the asset instead of displaying it. We also allow <code>.jpg</code> to be served as <code>image/webp</code>, or <code>.gif</code> as <code>video/webm</code>, and other cases that we think are unlikely to be attacks.</p><p>This new Page Rule depends upon <a href="https://support.cloudflare.com/hc/en-us/articles/115003206852s">Origin Cache Control</a>. A <code>Cache-Control</code> header from the origin or an Edge Cache TTL Page Rule will override this protection.</p> ]]></content:encoded>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[Page Rules]]></category>
            <category><![CDATA[Vulnerabilities]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Best Practices]]></category>
            <guid isPermaLink="false">f3uB5nHEVeLV7BA4Q2wzt</guid>
            <dc:creator>Ka-Hing Cheung</dc:creator>
        </item>
        <item>
            <title><![CDATA[Simple Cyber Security Tips (for your Parents)]]></title>
            <link>https://blog.cloudflare.com/cyber-security-advice-for-your-parents/</link>
            <pubDate>Mon, 25 Dec 2017 15:32:37 GMT</pubDate>
            <description><![CDATA[ Today, December 25th, Cloudflare offices around the world are taking a break. From San Francisco to London and Singapore; engineers have retreated home for the holidays (albeit with those engineers on-call closely monitoring their mobile phones). ]]></description>
            <content:encoded><![CDATA[ <p>Today, December 25th, Cloudflare offices around the world are taking a break. From San Francisco to London and Singapore, engineers have retreated home for the holidays (albeit with those engineers on-call closely monitoring their mobile phones).</p><blockquote><p>Software engineering pro-tip:</p><p>Do not, I repeat, do not deploy this week. That is how you end up debugging a critical issue from your parent's wifi in your old bedroom while your spouse hates you for abandoning them with your racist uncle.</p><p>— Chris Albon (@chrisalbon) <a href="https://twitter.com/chrisalbon/status/943342608742604801?ref_src=twsrc%5Etfw">December 20, 2017</a></p></blockquote><p>Whilst our Support and SRE teams operated on a schedule to ensure fingers stayed on keyboards, on Saturday I headed out of London, bound for the Warwickshire countryside. Away from the barracks of the London tech scene, it didn't take long for the following conversation to happen:</p><ul><li><p>Family member: "So what do you do nowadays?"</p></li><li><p>Me: "I work in Cyber Security."</p></li><li><p>Family member: "There seems to be a new cyber attack every day on the news! What can I possibly do to keep myself safe?"</p></li></ul><p>If you work in the tech industry, you may find a family member asking you for advice on <a href="https://www.cloudflare.com/learning/security/what-is-cyber-security/">cybersecurity</a>. This blog post will hopefully save you from stuttering whilst trying to formulate advice (like I did).</p>
    <div>
      <h3>The Basics</h3>
      <a href="#the-basics">
        
      </a>
    </div>
    <p>The WannaCry ransomware attack was one of the most high-profile cyberattacks of 2017. In essence, ransomware works by infecting a computer, then encrypting files - preventing users from being able to access them. Users then see a window on their screen demanding payment, with the promise of decrypting files. Multiple copycat viruses have also sprung up, using the same exploit as WannaCry.</p><p>It is worth noting that even after paying, you're unlikely to get your files back (don't expect an honest transaction from criminals).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4lRcjVSkC9AzNgSL3acz7W/9f9db36a007f34dfd1a6e8dc360a62f4/Wana_Decrypt0r_screenshot.png" />
            
            </figure><p>WannaCry was estimated to have infected over 300,000 computers around the world, including high-profile government agencies and corporations; the UK's National Health Service was one notable victim.</p><p>Despite the wide-ranging impact of this attack, a lot of victims could have protected themselves fairly easily. Security patches were already available to fix the bug that allowed this attack to happen, and installing anti-virus software could have contained the spread of the ransomware.</p><p>For consumers, it is generally a good idea to install updates, particularly security updates. Platforms like Windows XP no longer receive security updates and therefore shouldn't be used, however well-patched they once were.</p><p>Of course, it is also essential to back up your most indispensable files, and not just because of the damage security vulnerabilities can cause.</p>
    <div>
      <h3>Don't put your eggs in one Basket</h3>
      <a href="#dont-put-your-eggs-in-one-basket">
        
      </a>
    </div>
    <p>It may not be Easter, but you certainly should not be putting all your eggs in one basket. For this reason, it is not a good idea to use the same password across multiple sites.</p><p>Passwords have been around since 1961; however, no alternative has been found which keeps all their benefits. Users continue to set passwords weakly, and website developers continue to store them insecurely.</p><p>When developers store passwords, they should do so in a way that lets them check a password is correct without ever being able to recover the original password. Unfortunately, many websites (including some popular ones) implement this poorly. When they get hacked, a password dump can be leaked with everyone's emails/usernames alongside their passwords.</p><p>If the same email/username and password combination is used on multiple sites, hackers can automatically use the breached user data from one site to attempt logins against the other websites you use online.</p><p>For this reason, it's absolutely critical to use a unique password on every site. Password manager apps like <a href="https://www.lastpass.com/">LastPass</a> or <a href="https://1password.com/">1Password</a> allow you to use unique, randomly-generated passwords for each site, but manage them from one encrypted wallet using a master password.</p><p>Simple passwords, based on personal information or on individual dictionary words, are far from safe too. Computers can repeatedly run through common passwords in order to crack them. Similarly, adding numbers and symbols (i.e. changing <i>password</i> to <i>p4$$w0rd</i>) will also do little to help.</p><p>When you have to choose a password you need to remember, you can create a strong password from a sentence. For example: <i>"At Christmas my dog stole 2 pairs of Doc Martens shoes!"</i> can become <i>ACmds2poDMs!</i> Passwords based on simple sentences can be long, but still easy to remember.</p><p>Another approach is to simply select four random dictionary words, for example: <i>WindySoapLongBoulevard</i>. (For obvious reasons, don't actually use that as your password.) Although this password uses solely letters, it is more secure than a shorter password that also uses numbers and symbols.</p>
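<p>The sentence trick above is mechanical enough to sketch in code. This illustrative Python snippet (the function name is my own) takes the first character of each word and keeps any trailing punctuation:</p>

```python
def sentence_to_password(sentence: str) -> str:
    """Build a password from the first character of each word in a sentence."""
    chars = [word[0] for word in sentence.split()]
    # Keep trailing punctuation such as '!' for a little extra variety.
    if sentence and not sentence[-1].isalnum():
        chars.append(sentence[-1])
    return "".join(chars)


print(sentence_to_password("At Christmas my dog stole 2 pairs of Doc Martens shoes!"))
# → ACmds2poDMs!
```

<p>Of course, the point is that <i>you</i> remember the sentence; the snippet just shows that the mapping is simple and repeatable.</p>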
    <div>
      <h3>Layering Security</h3>
      <a href="#layering-security">
        
      </a>
    </div>
    <p>Authentication is how computers confirm you are who you say you are. Fundamentally, this is done using either:</p><ul><li><p>Something you know</p></li><li><p>Something you have</p></li><li><p>Something you are</p></li></ul><p>A password is an example of logging in with "something you <i>know</i>"; if someone is able to gain access to that password, it's game over for that online account.</p><p>Instead, it is possible to use "something you <i>have</i>" as well. This means that, should your password be intercepted or disclosed, you still have another safeguard protecting your account.</p><p>In practice, this means that after entering your password onto a website, you may also be prompted for another code that you read off an app on your phone. This is known as Two-Factor Authentication.</p>
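<p>Those short-lived codes are typically generated with TOTP (RFC 6238), which applies the HOTP algorithm (RFC 4226) to the current 30-second time window. A minimal, illustrative Python sketch:</p>

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the counter, dynamically truncated to N digits."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret: bytes, period: int = 30, at=None) -> str:
    """RFC 6238: HOTP keyed to the current time window."""
    timestamp = time.time() if at is None else at
    return hotp(secret, int(timestamp // period))


# RFC 4226 test vector: counter 0 with the ASCII secret below yields 755224.
print(hotp(b"12345678901234567890", 0))  # → 755224
```

<p>Because both your phone and the server derive the code from a shared secret and the current time, the phone needs no network connection to produce a valid code.</p>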
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7fn1UkBgYFczT7NTNiKRZO/90d3170bf9d14915912e442608c1dca8/Screen-Shot-2017-12-25-at-14.00.33.png" />
            
            </figure><p><a href="https://support.cloudflare.com/hc/en-us/articles/200167906-Securing-user-access-with-two-factor-authentication-2FA-">Two-Factor Authentication</a> is supported on many of the world's most popular social media, banking and shopping sites.</p>
    <div>
      <h3>Know who you talk to</h3>
      <a href="#know-who-you-talk-to">
        
      </a>
    </div>
    <p>When you browse to a website online, you may notice a lock symbol light up in your address bar. This indicates that encryption is enabled when talking to the website, which is important to prevent interception.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6euuW53rJuMZCoDqiMMpFs/5b5e4aa6041ae66cac81c87acd542634/green_lock.png" />
            
            </figure><p>When inputting personal information into websites, it is important you check this green lock appears and that the website address starts with "<i>http</i><b><i>s</i></b><i>://</i>".</p><p>It is, however, important to double-check the address bar you're putting your personal information into. Is it <i>cloudflare.com</i>, or have you been redirected away to a dodgy website at <i>cloudflair.com</i> or <i>cloudflare.net</i>?</p><p>Despite how common encrypted web traffic has become, on many sites it remains relatively easy to strip away this encryption by pointing internet traffic to a different address. I describe how this can be done in: <a href="/performing-preventing-ssl-stripping-a-plain-english-primer/">Performing &amp; Preventing SSL Stripping: A Plain-English Primer</a>.</p><p>It is also good guidance to be careful about the links you see in emails; are they legitimate emails from your bank, or is someone trying to capture your personal information with a fake "phishing" website that looks just like your bank's? Just because someone has a little bit of information about you doesn't mean they are who they say they are. When in doubt, avoid following links directly in emails, and check the validity of the email independently (such as by going directly to your banking website). A correct-looking "from" address isn't enough to prove an email is coming from who it says it's from.</p>
    <div>
      <h3>Conclusions</h3>
      <a href="#conclusions">
        
      </a>
    </div>
    <p>We always hear of new and innovative security vulnerabilities, but for most users, remembering a handful of simple security tips is enough to protect against the majority of security threats.</p>
    <div>
      <h4>In Summary:</h4>
      <a href="#in-summary">
        
      </a>
    </div>
    <ul><li><p>As a rule of thumb, install the latest security patches</p></li><li><p>Don't use obsolete software which no longer receives security patches</p></li><li><p>Use well-trusted anti-virus software</p></li><li><p>Back up the files and folders you can't afford to lose</p></li><li><p>Use a password manager to set random, unique passwords for every site</p></li><li><p>Don't use common keywords or personal information as passwords</p></li><li><p>Adding numbers and symbols to passwords often doesn't add security, but does make them harder to remember</p></li><li><p>Enable Two-Factor Authentication on sites which support it</p></li><li><p>Check the address bar when inputting personal information; make sure the connection is encrypted and the site address is correct</p></li><li><p>Don't believe everything you see in your email inbox or trust every link sent through email, even if the sender has some information about you</p></li></ul><p>Finally, from everyone at Cloudflare, we wish you a wonderful and safe holiday season. For further reading, check out the <a href="/imdb-2017/">Internet Mince Pie Database</a>.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Best Practices]]></category>
            <category><![CDATA[HTTPS]]></category>
            <guid isPermaLink="false">3mw7QGYcjAUR8wP9W3N8TJ</guid>
            <dc:creator>Junade Ali</dc:creator>
        </item>
        <item>
            <title><![CDATA[5 Strategies to Promote Your App]]></title>
            <link>https://blog.cloudflare.com/5-strategies-to-best-promote-your-app/</link>
            <pubDate>Fri, 27 Oct 2017 17:30:00 GMT</pubDate>
            <description><![CDATA[ Brady Gentile from Cloudflare's product team wrote an App Developer Playbook, embedded within the developer documentation page.  ]]></description>
            <content:encoded><![CDATA[ <p>Brady Gentile from Cloudflare's product team wrote an <a href="https://www.cloudflare.com/apps/assets/Cloudflare%20Apps%20Developer%20Playbook.pdf">App Developer Playbook</a>, embedded within the developer documentation <a href="https://www.cloudflare.com/apps/developer/docs/getting-started">page</a>. He decided to write it after he and his team conducted several app developer interviews and found that many developers wanted to learn how to better promote their apps.</p><p>They wanted to help app authors in areas outside of developers' core expertise: social media posting, community outreach, email deployment, SEO, blog posting, and syndication can all be daunting.</p><p>I wanted to take a moment to highlight some of the tips from the App Developer Playbook because I think Brady did a great job of providing clear ways to approach promotional strategies.</p>
    <div>
      <h3>5 Promotional Strategies</h3>
      <a href="#5-promotional-strategies">
        
      </a>
    </div>
    <hr />
    <div>
      <h4>1. Share with online communities</h4>
      <a href="#1-share-with-online-communities">
        
      </a>
    </div>
    <p>Your app’s potential audience likely reads community-aggregated news sites such as <a href="https://news.ycombinator.com/">HackerNews</a>, <a href="https://www.producthunt.com/">Product Hunt</a>, or <a href="https://www.reddit.com/">reddit</a>. Sharing your app across these websites is a great way for new users to find it.</p>
            <figure>
            <a href="https://news.ycombinator.com/">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6XnoCKWlUMSfUxDVJvIF5q/e325d5efba8522fb30fcf0db285a0feb/hacker-news.jpg" />
            </a>
            </figure><p>For apps that are interesting to developers, designers, scientists, entrepreneurs, etc., share your work with the Hacker News community. Be sure to follow the <a href="https://news.ycombinator.com/newsguidelines.html">official guidelines</a> when posting and when engaging with the community. It may be tempting to ask your friends to upvote you, but honesty is the best policy, and the vote-ring detector will bury your post if you try to game it. Instead, if you don’t make the frontpage on the first try, consider re-posting on another day with any of these options: the frontpage of your site, the blog post about the launch of your app, a demo of your app in action, or a GitHub repo. It may be worth considering the rate at which new posts are being added to /newest per minute or per hour, which affects the likelihood of your post making it to the frontpage.</p><p>Since you’re sharing a project that people can play with, be sure to: 1) use “Show HN” and follow the <a href="https://news.ycombinator.com/showhn.html">Show HN guidelines</a>, and 2) be available to answer questions in the comments.</p><p>Start your title with the words ‘Show HN:’ (this indicates that you’re sharing something interesting that you’ve built, with a live demo people can try), then briefly explain your app within the same field. Rather than just using the name of your app, consider adding something informative, like the short description you use in your Cloudflare Apps marketplace tile. For instance, “Show HN: Trebble (embed voice and music on your site)” is more informative than “Show HN: Trebble” as a post title. 
Next, you’ll have the option of either submitting the URL of your app or explaining a little bit about yourself, the app, and pasting a link to the app itself.</p><p>Lastly, you should probably take the time to explain yourself and what you're all about in a first comment, as it helps build good rapport with the community. Block off some time on your calendar so you’re available to answer questions and engage with the community for however long your post is on the frontpage. In addition to gathering their valuable feedback, a signal that the app author is there (“Hi, I’m <b>Name</b> and I made this app to solve <b>this problem</b> -- I’d love to get your feedback.”) will often make your project more approachable and put a face on a product.</p>
            <figure>
            <a href="https://www.producthunt.com/">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5cBCwVUlhNEO4roRrLRCgH/4c8694efed570d1281707ecf6e2d177b/Product-Hunt.svg" />
            </a>
            </figure><p>Product Hunt has released <a href="https://blog.producthunt.com/how-to-launch-on-product-hunt-7c1843e06399">a blog post</a> which outlines how to properly submit your app or product to their community. I highly recommend you review this post in its entirety prior to launching your Cloudflare App.</p>
            <figure>
            <a href="https://www.reddit.com/">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/36KOnCDO8hM5Qvg7RsOFNi/dbf986230b19b64e353103c966e7a0d7/Reddit.png" />
            </a>
            </figure><p>Submit a link to your app, along with some screenshots or videos and a descriptive title for your post, and select a subreddit to post to. For the title, you’ll want something descriptive about your app; for example, you could say “I just built an app that does [X].”</p><p>If your app isn't relevant to the subreddit in which you post, it'll likely be removed by a moderator, so think carefully about which subreddits would find your app genuinely useful. I also recommend you take some time to engage with each community prior to posting your app, in part because their feedback is valuable, and in part so that you’re not a stranger. Here are two subreddits you should definitely include: <a href="https://www.reddit.com/r/apps/">Apps</a> and <a href="https://www.reddit.com/r/Cloudflare/">Cloudflare</a>.</p>
    <div>
      <h4>2. Optimize your app for discoverability</h4>
      <a href="#2-optimize-your-app-for-discoverability">
        
      </a>
    </div>
    <p>One of the most important steps of the Cloudflare app deployment process is ensuring that both visitors browsing <a href="https://www.cloudflare.com/apps/">Cloudflare Apps</a> and anyone searching the web can quickly and easily find your app. By optimizing your Cloudflare app for discoverability, you’ll receive more views, more installations, and more revenue.</p>
    <div>
      <h4>Title and description</h4>
      <a href="#title-and-description">
        
      </a>
    </div>
    <p>Your app’s title and short description are the first things millions of website owners will see when they come across your Cloudflare app, whether by browsing <a href="https://www.cloudflare.com/apps/">Cloudflare Apps</a> or via a search engine. It’s important that an app’s title is unique, descriptive, and identifiable.</p><p><a href="https://www.cloudflare.com/apps/noadblock">NoAdBlock</a> is a great example.</p>
            <figure>
            <a href="https://www.cloudflare.com/apps/noadblock">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1tJau7uPvNc7W1WRxLLDRL/1392d227dd674c452a7737782a869c16/NoAdBlock-example.png" />
            </a>
            </figure>
    <div>
      <h4>Screenshots</h4>
      <a href="#screenshots">
        
      </a>
    </div>
    <p>Showcasing how your app might appear on a user’s website gives confidence to users thinking about previewing and installing. Include a variety of screenshots, showing multiple ways in which the software can be configured on a user’s website.</p><p>Read more about how to configure your full app description and categories in the <a href="https://www.cloudflare.com/apps/assets/Cloudflare%20Apps%20Developer%20Playbook.pdf">App Developer Playbook</a>.</p>
    <div>
      <h4>3. Promote through your properties</h4>
      <a href="#3-promote-through-your-properties">
        
      </a>
    </div>
    <p>Once your app has launched on <a href="https://www.cloudflare.com/apps/">Cloudflare Apps</a>, it’s important that users are able to envision how your app will work for them and that they're easily able to use it.</p>
    <div>
      <h4>Building an app preview link</h4>
      <a href="#building-an-app-preview-link">
        
      </a>
    </div>
    <p>Preview links allow you to generate a link to the install page for your app, which includes customization options for users to play around with.</p><p>Check out this preview for the <a href="https://www.cloudflare.com/apps/spotify/install">Spotify app</a>:</p>
            <figure>
            <a href="https://www.cloudflare.com/apps/spotify/install">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49j9hnvytMvtqocDsKhsRC/007beca2030f9ef655347a69c88d9bc2/Screen-Recording-2017-10-20-at-12.01-PM.gif" />
            </a>
            </figure>
    <div>
      <h4>Install badge and placement</h4>
      <a href="#install-badge-and-placement">
        
      </a>
    </div>
    <p>Make it easy and obvious for users. The Cloudflare Install Button is an interactive badge which can be embedded in any online asset, including websites and emails.</p><p>To use the full Cloudflare App install badge, paste the code listed in the Playbook onto your website or marketing page. You just need to replace [ appTitle ] and [ appId or appAlias ] with the appropriate details for your app. You can choose a standard button or customize it to match your app.</p><p>Here's what <a href="https://www.cloudflare.com/apps/NoAdBlock">NoAdBlock</a> used:</p>
            <figure>
            <a href="https://www.cloudflare.com/apps/NoAdBlock">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5orFHu4BRSHz2ll6ixfEWP/4d8d7816371bf65addae801c78fb24df/Install-button.png" />
            </a>
            </figure>
    <div>
      <h4>4. Spread word to existing users</h4>
      <a href="#4-spread-word-to-existing-users">
        
      </a>
    </div>
    <p>A quick and easy way to announce your app’s availability is to notify your user base that the app is now available for them to preview and install. Read more in the Playbook on how to grow your user base before the launch.</p><p>Here's a good starting template for an email announcement:</p>
            <figure>
            <a href="https://www.cloudflare.com/apps/assets/Cloudflare%20Apps%20Developer%20Playbook.pdf">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1DshQbe5MzECR3AGMlSdJZ/aedcbdcc12f848d65894915427ed10c2/Email-Template.png" />
            </a>
            </figure>
    <div>
      <h4>5. Form a presence on social media</h4>
      <a href="#5-form-a-presence-on-social-media">
        
      </a>
    </div>
    <p>Targeting users across multiple channels is an easy way to ensure that website owners know your app is now available, and Cloudflare can help you with this. Tag @Cloudflare in your posts so they can be retweeted and reshared.</p>
            <figure>
            <a href="https://www.cloudflare.com/apps/assets/Cloudflare%20Apps%20Developer%20Playbook.pdf">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7hNt3pTVZmStDmaMwVWqS4/8ef2fa8195208cf0797417cd43788510/Twitter-Announcementsw.png" />
            </a>
            </figure>
            <figure>
            <a href="https://www.cloudflare.com/apps/assets/Cloudflare%20Apps%20Developer%20Playbook.pdf">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1yOVoMAV90FDsbGPbNwhV3/7ea04b0c043bad5e3b10e85994c22abd/Facebook-Announcements.png" />
            </a>
            </figure>
    <div>
      <h4>Blog Stuff</h4>
      <a href="#blog-stuff">
        
      </a>
    </div>
    <p>Another way to promote the release of your app is by writing a blog post (or several) on your app’s website, delving into the features and benefits that your app brings to users. In addition to your launch post, you can enumerate the new features and bug fixes in a new and improved release, highlight different use cases from your own user base, or deep dive into a fascinating aspect of how you implemented your app.</p><p>Here's a <a href="https://blog.getadmiral.com/admiral-launches-adblock-solution-for-cloudflare-publishers/">well-written launch blog post</a> from the makers of <a href="https://www.cloudflare.com/apps/Admiral">Admiral</a>.</p><p>Other blogs may help you with this as well. Syndication is a great way to gain significant exposure for your posts. Brainstorm a list of blogs aimed at the core audience for your app, then reach out and ask if you can contribute a guest blog post. If developers are the core audience, drop a line to <a href="mailto:community@cloudflare.com">community@cloudflare.com</a>. I’d love to have a conversation about whether a guest post featuring your app would be right for the Cloudflare blog.</p><hr /><p>Again, this is just a glimpse into the guidance that the <a href="https://www.cloudflare.com/apps/assets/Cloudflare%20Apps%20Developer%20Playbook.pdf">App Developer Playbook</a> provides. Check it out and share it with your community of app developers.</p><p>Happy, productive app launching to you!</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Apps]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Best Practices]]></category>
            <category><![CDATA[SEO]]></category>
            <category><![CDATA[Community]]></category>
            <guid isPermaLink="false">1XiY3FmKSA7yqFwNVlB0pS</guid>
            <dc:creator>Andrew Fitch</dc:creator>
        </item>
        <item>
            <title><![CDATA[Spotify's Cloudflare App is open source: fork it for your next project]]></title>
            <link>https://blog.cloudflare.com/spotify-highlight/</link>
            <pubDate>Wed, 25 Oct 2017 17:00:00 GMT</pubDate>
            <description><![CDATA[ Earlier this year, Cloudflare Apps was launched so that app developers could leverage our global network of 6 million+ websites, applications, and APIs.  ]]></description>
            <content:encoded><![CDATA[ <p>Earlier this year, <a href="https://www.cloudflare.com/apps/">Cloudflare Apps</a> was launched so that app developers could leverage our global network of 6 million+ websites, applications, and APIs. I’d like to take a moment to highlight Spotify, which was a launch partner for Cloudflare Apps, especially since they have elected to open source the code to their Cloudflare App.</p><p><a href="https://github.com/CloudflareApps/Spotify">Spotify GitHub repo »</a></p>
    <div>
      <h4>About Spotify</h4>
      <a href="#about-spotify">
        
      </a>
    </div>
    <p>Spotify is the leading digital service for streaming music, serving more than 140 million listeners.</p>
    <div>
      <h4>What does the Spotify app do?</h4>
      <a href="#what-does-the-spotify-app-do">
        
      </a>
    </div>
    <p>Recently, Spotify launched a Cloudflare App that lets you instantly and easily embed the Spotify player on your website without having to copy and paste anything.</p>
            <figure>
            <a href="https://www.cloudflare.com/apps/spotify/install">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4kB6BTV56o10E7upCD5D8S/c9de2e2e016a287ce71a2ccab99524b1/Screen-Shot-2017-10-20-at-11.46.19-AM.png" />
            </a>
            </figure>
    <div>
      <h4>Who should install the Spotify app?</h4>
      <a href="#who-should-install-the-spotify-app">
        
      </a>
    </div>
    <ul><li><p>A musician who runs a site for their band and wants to play samples of new tracks on their tour calendar page to psych up their fans.</p></li><li><p>A game creator who wants to share their game's soundtrack with their fans.</p></li><li><p>An activewear company which wants to deliver popular running playlists to its customers.</p></li></ul><p>Web properties that install the Spotify app can increase user engagement.</p><p>Add Spotify widgets to your web pages and let your users play tracks and follow Spotify profiles. Add a Spotify Play Button to your blog, website, or social page; all your fans have to do is hit “Play” to enjoy the music. You can create Play Buttons for albums, tracks, artists, or playlists.</p>
    <div>
      <h4>How it works for the user</h4>
      <a href="#how-it-works-for-the-user">
        
      </a>
    </div>
    <p>When a logged-in Spotify user clicks the button on your page, the music will start playing in the Spotify player. If the user isn’t logged into their Spotify account, the play button will play a 30-second audio preview of the music and they will be prompted to login or sign up.</p>
    <div>
      <h4>How it works for the website owner</h4>
      <a href="#how-it-works-for-the-website-owner">
        
      </a>
    </div>
    <p>You can customize your button as well as link to any song or album you prefer in Spotify’s music catalog or to a playlist you’ve generated. Take a look at the preview of how the Spotify app would appear on a site here:</p>
            <figure>
            <a href="https://www.cloudflare.com/apps/spotify/install">
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3qwZeukPwV918vmYvKAuOB/e6a837e65a54f68691b09d89d43ee5f8/Screen-Recording-2017-10-20-at-12.01-PM.gif" />
            </a>
            </figure><p>The Cloudflare App creator allows you to preview the app on your site without making any changes to your code.</p><p>In the left pane, you can see the install options where you can select what kind of widget you’d like displayed: a playlist, a track, or a follow button. You can customize the size, theme and position of the banner on your site. The “Pick a location” tool uses CSS selectors to allow you to pinpoint the location on your site where it’s displayed.</p><p>In the right pane, you can preview your choices, seeing what they’d look like on your website and experiment with placement and how it flows with the site. This is very similar to the tool that the app developer uses to test the app for how it behaves on a wide range of web properties.</p><p><a href="https://www.cloudflare.com/apps/spotify/install">Play with the Spotify Preview now »</a></p>
    <div>
      <h4>Fork this App</h4>
      <a href="#fork-this-app">
        
      </a>
    </div>
    <p>Our friends at Spotify made their code available on GitHub. You can clone and fork the repository <a href="https://github.com/CloudflareApps/Spotify">here</a>. It’s a great way to get some practice developing Cloudflare Apps and to start with some basic scaffolding for your app.</p><p>Check out the documentation for Cloudflare Apps <a href="https://www.cloudflare.com/apps/developer/docs/getting-started">here</a>.</p><p>Check out Cloudflare’s new App Developer Playbook, a step-by-step marketing guide for Cloudflare app developers <a href="https://www.cloudflare.com/apps/assets/Cloudflare%20Apps%20Developer%20Playbook.pdf">here</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Best Practices]]></category>
            <category><![CDATA[Cloudflare Apps]]></category>
            <guid isPermaLink="false">5gbSGQSwIjDSHNUWvUmG8m</guid>
            <dc:creator>Andrew Fitch</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing the new Cloudflare Community Forum]]></title>
            <link>https://blog.cloudflare.com/cloudflare-community/</link>
            <pubDate>Wed, 03 May 2017 21:02:29 GMT</pubDate>
            <description><![CDATA[ Cloudflare’s community of users is vast. With more than 6 million domains registered, our users come in all shapes and sizes and are located all over the world.  ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare’s community of users is vast. With <a href="https://www.cloudflare.com/products/registrar/">more than 6 million domains registered</a>, our users come in all shapes and sizes and are located all over the world. They can also frequently be found hanging out all around the web, from <a href="https://twitter.com/search?f=tweets&amp;vertical=default&amp;q=cloudflare&amp;src=tyah">social media platforms</a>, to <a href="http://stackoverflow.com/search?q=cloudflare">Q&amp;A sites</a>, to any number of personal interest forums. Cloudflare users have questions to ask and an awful lot of expertise to share. </p><p>It’s with that in mind that we wanted to give Cloudflare users a more centralized location to gather, and to discuss all things Cloudflare. So we have launched a new Cloudflare Community at <a href="https://community.cloudflare.com/">community.cloudflare.com</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bFQzPdIV1G41WpVUY645h/316d4e89ab2315fa4fa72dc75383d93a/Screen-Shot-2017-05-01-at-1.09.27-PM.png" />
            
            </figure>
    <div>
      <h3>Who is this community for?</h3>
      <a href="#who-is-this-community-for">
        
      </a>
    </div>
    <p>It's for anyone and everyone who uses Cloudflare, whether you are adding your first domain and don’t know what a name server is, managing thousands of domains via the API, or somewhere in between. In the Cloudflare Community you will be able to find tips, tricks, troubleshooting guidance, and recommendations.</p><p>We also think this will be a great way to get feedback from users on what’s working for them, what isn’t, and ways that we can make Cloudflare better. There will even be opportunities to participate in early access programs for new and evolving features.</p>
    <div>
      <h3>How do I access it?</h3>
      <a href="#how-do-i-access-it">
        
      </a>
    </div>
    <p>Anyone can visit <a href="https://community.cloudflare.com/">community.cloudflare.com</a> and look around, soaking in all the information and expertise available, but in order to post questions or comments you must have a Cloudflare account. We wanted to keep this forum focused on using and improving Cloudflare, not on questions from people who visit sites that use Cloudflare services. When users visit the forum and try to sign in for the first time, they will go through the usual Cloudflare login process, with an added step for email verification. Once that is done, users are good to go.</p>
    <div>
      <h3>How to use the forum</h3>
      <a href="#how-to-use-the-forum">
        
      </a>
    </div>
    <p>We’ve started off with some broad categories like <a href="https://community.cloudflare.com/c/performance">Performance</a>, <a href="https://community.cloudflare.com/c/security">Security</a>, <a href="https://community.cloudflare.com/c/prodfeedback">Product Feedback</a>, etc., and created some tags to cover some of the more specific products and topics. In time, we could expand to more dedicated categories around things like SSL or <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">WAF</a>, but we didn’t want to separate things off too much up front. There is also a <a href="https://community.cloudflare.com/c/meta">Meta</a> category where members can direct questions or suggestions about the Community. So, just put your topic in the area that you think is best, throw on a tag or two if necessary, and we’ll all figure this out together. But don’t forget to search and see if someone’s already discussing it.</p><p>We aren’t reducing our presence on social media or other popular discussion sites. But we are hopeful that having a centralized location will make it even easier for people who want to get together and discuss their Cloudflare experiences to find the conversations they are looking for. So be sure to visit <a href="https://community.cloudflare.com/">community.cloudflare.com</a> and say hello!</p>
            <category><![CDATA[Community]]></category>
            <category><![CDATA[Support]]></category>
            <category><![CDATA[Life at Cloudflare]]></category>
            <category><![CDATA[Best Practices]]></category>
            <guid isPermaLink="false">1CP5IuSbeA9grRSU4Bvj6h</guid>
            <dc:creator>Ryan Knight</dc:creator>
        </item>
        <item>
            <title><![CDATA[Ecommerce websites on Cloudflare: best practices]]></title>
            <link>https://blog.cloudflare.com/ecommerce-best-practices/</link>
            <pubDate>Tue, 25 Apr 2017 07:45:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare provides numerous benefits to ecommerce sites, including advanced DDOS protection and an industry-leading Web Application Firewall (WAF) that helps secure your transactions and protect customers’ private data. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare provides numerous benefits to <a href="https://www.cloudflare.com/ecommerce/">ecommerce sites</a>, including advanced DDoS protection and an industry-leading <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">Web Application Firewall (WAF)</a> that helps secure your transactions and protect customers’ private data.</p><p>A key Cloudflare feature is caching, which allows content to be served closer to the end user from our global network of data centers. Doing so improves the user's shopping experience and increases the proportion of people completing a purchase (the conversion rate).</p><p>For example:</p><ul><li><p>Walmart found that improving page load time by 1 second increased their conversion rate by 2%</p></li><li><p>Research at Amazon showed that every 0.1 seconds of delay cost 1% of sales</p></li><li><p>The Barack Obama campaign website saw a 14% increase in donations after an 80% improvement in page load time</p></li></ul>
    <div>
      <h3>What is caching?</h3>
      <a href="#what-is-caching">
        
      </a>
    </div>
    <p>Cloudflare <a href="https://www.cloudflare.com/network/">operates over 110 data centers around the world</a>. When a website implements Cloudflare, visitor requests for the site will proxy through the nearest Cloudflare data center instead of connecting directly to the webserver hosting the site (origin). This means Cloudflare can store content such as images, JavaScript, CSS and HTML on our servers, speeding up access to those resources for end-users.</p><p>Most ecommerce websites rely on a backend database containing product descriptions and metadata such as prices. Without caching, each visit to a product page might involve several database requests to pull all the required data, which can introduce added latency to page load time, particularly on a busy website. Serving the website's homepage and product pages from Cloudflare's cache not only eliminates these costly database calls, but also reduces the load on your origin infrastructure.</p><p>To make the most of Cloudflare and to help maximize the speed of your website, serve as much content as possible from the Cloudflare cache.</p>
    <div>
      <h3>How Cloudflare caching works</h3>
      <a href="#how-cloudflare-caching-works">
        
      </a>
    </div>
    <p>By default, Cloudflare caches static content based on a <a href="https://support.cloudflare.com/hc/en-us/articles/200172516-Which-file-extensions-does-CloudFlare-cache-for-static-content-">fixed list of file extensions</a> which includes assets such as images, CSS files and PDFs.</p><p>The reason Cloudflare only caches static content out of the box (and does not cache HTML content by default) is to avoid the risk of inappropriate data being cached. For example, if the shopping cart page is cached, then the next visitor might receive the cached version and see a cart with the incorrect contents. Therefore, while enabling more caching will let you make the most of Cloudflare, it requires careful and considered implementation.</p><p>Additional caching on Cloudflare can be enabled in one of two ways: using Page Rules, or by sending cache headers from your origin. These two methods are explained in more detail <a href="https://support.cloudflare.com/hc/en-us/articles/202775670-How-Do-I-Tell-CloudFlare-What-to-Cache-">here</a>. In this blog post we’ll use Page Rules, but keep in mind you can use headers from your origin too.</p>
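<p>As a sketch of the second approach (origin-controlled caching), an origin server might attach a <code>Cache-Control</code> header per path. The routing logic, paths, and max-age values below are hypothetical and purely illustrative of the idea, not a Cloudflare API:</p>

```javascript
// Hypothetical origin-side logic: choose a Cache-Control header per path.
// Static assets get a long public TTL; per-user pages are never cached.
function cacheHeadersFor(path) {
  if (path.startsWith('/ajax/') || path.startsWith('/checkout')) {
    // Dynamic, per-user content: tell every cache to stay out of the way.
    return { 'Cache-Control': 'private, no-store' };
  }
  if (/\.(css|js|png|jpg|svg|pdf)$/.test(path)) {
    // Static assets: safe to cache at the edge for a day.
    return { 'Cache-Control': 'public, max-age=86400' };
  }
  // Product/HTML pages: a shorter TTL so price changes propagate quickly.
  return { 'Cache-Control': 'public, max-age=300' };
}

console.log(cacheHeadersFor('/styles/main.css'));
console.log(cacheHeadersFor('/ajax/basket_contents.php'));
```

<p>The same split (static vs. per-user content) is what the Page Rules in the rest of this post express, just configured at the edge instead of at the origin.</p>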
    <div>
      <h3>Using caching on ecommerce sites</h3>
      <a href="#using-caching-on-ecommerce-sites">
        
      </a>
    </div>
    <p>A typical HTML page on an ecommerce website will contain static content (such as the product description) and dynamic content such as:</p><ul><li><p>a header section which varies according to the visitor’s logged in state - e.g. if the user is logged in, it may offer the user a “Logged in as..." message</p></li><li><p>a basket section which populates as the user shops on the site</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/693W6zmtTqjWk76TOCYqCB/ba2c2fb65d96f0b1985f6af9acbdf170/Cloudflare-Ecommerce-Best-Practices.png" />
            
            </figure><p>The user might have one or more session cookies to maintain these dynamic elements.</p><p>There are a few ways to make the most of Cloudflare's caching while taking into account the dynamic nature of ecommerce websites.</p>
    <div>
      <h3>Method 1: cache everything on Cloudflare but bypass the cache for private content</h3>
      <a href="#method-1-cache-everything-on-cloudflare-but-bypass-the-cache-for-private-content">
        
      </a>
    </div>
    <p><i>Note: the Bypass Cache on Cookie feature is only available on the Cloudflare Business and Enterprise </i><a href="https://www.cloudflare.com/plans/"><i>plans</i></a></p><p>Many visitors to a site will be brand-new, first-time visitors - in other words, they won’t be logged in to the site and won’t have any items in their basket.</p><p>Serving their request from the Cloudflare cache means they can quickly view the page they’re looking for (whether the homepage or a specific product page). Because such a visitor has no personalized state, the entire page can be served from the Cloudflare cache.</p><p>With most ecommerce platforms, as soon as the user logs in to the site or adds an item to the basket, a relevant cookie is sent to the browser.</p><p>Cloudflare can cache the pages, but will bypass the cache if it receives either of these cookies from the browser.</p><p>This is achieved by introducing a <a href="https://support.cloudflare.com/hc/en-us/articles/200168306-Is-there-a-tutorial-for-Page-Rules-">Page Rule</a> with a “Bypass Cache on Cookie” setting:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2IMt5NxIQXyZGwv8JniGgH/fc7ddf4c27b19c984e804815bbcd0958/pagerulesettings.png" />
            
            </figure><p>In the above example, the Page Rule will cause all requests to the site to be served from cache, unless the web browser has sent a cookie named “loggedin” or “iteminbasket”.</p><p>Every ecommerce platform is different, so always think through your settings and use the correct cookie names, ensuring there is no risk of private data (e.g. someone’s shopping basket) being served from cache and shown to another visitor.</p>
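<p>To illustrate the origin side of this setup, here is a minimal sketch - a hypothetical, framework-free WSGI app, not tied to any real ecommerce platform - that sets the “loggedin” and “iteminbasket” cookies the Page Rule keys on:</p>

```python
# Hypothetical origin sketch: once either cookie below is set, the
# "Bypass Cache on Cookie" Page Rule stops serving this visitor
# from the Cloudflare cache. Paths and cookie values are illustrative.
def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    headers = [("Content-Type", "text/html")]
    if path == "/login":
        # The cookie name must match the one listed in the Page Rule
        headers.append(("Set-Cookie", "loggedin=1; Secure; HttpOnly"))
    elif path == "/basket/add":
        headers.append(("Set-Cookie", "iteminbasket=1; Secure; HttpOnly"))
    start_response("200 OK", headers)
    return [b"ok"]
```

<p>Anonymous visitors never receive either cookie, so their page views continue to be served from the Cloudflare cache.</p>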
    <div>
      <h3>Method 2: Populating via JavaScript / AJAX</h3>
      <a href="#method-2-populating-via-javascript-ajax">
        
      </a>
    </div>
    <p>A better solution would be to serve the entirety of the page from cache, but populate the dynamic elements using JavaScript / AJAX.</p><p>This means Cloudflare will serve the bulk of the page content, and only small requests will pass (via Cloudflare) directly to the origin to populate dynamic elements such as the basket contents.</p><p>To configure this, use a Page Rule with Cache Level “Cache Everything” for the static content and another Page Rule with Cache Level “Bypass” for the dynamic (AJAX) requests.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4WRx5BjGEX5GQ6EUppvg3j/af57c919e57567ae1c6f933a177c7301/page_rules_to_bypass_cache_and_cache.png" />
            
            </figure><p>In this example, any requests going to <code>www.example.com/ajax/basket_contents.php</code> and <code>www.example.com/ajax/logged-in-state.php</code> would match the first Page Rule, which has cache level “Bypass” - Cloudflare will proxy the request but the request won't touch the Cloudflare cache.</p><p>Other requests, e.g. to <code>www.example.com/products/product_page</code>, would not match the first Page Rule but would instead match the second “Cache Everything” Page Rule - thus the product page is served from the Cloudflare cache. Within that product page, the dynamic elements (such as the basket contents and the logged-in state) are dynamically populated using the AJAX requests.</p><p>You should also consider introducing additional Page Rules for special pages - for example, you may wish to create a Page Rule that bypasses the cache for all the checkout pages.</p><p>Remember: only one Page Rule will execute for any given request, and Page Rules are processed in the order they exist in the Cloudflare control panel. Read over our <a href="https://support.cloudflare.com/hc/en-us/articles/200168306-Is-there-a-tutorial-for-Page-Rules-">Page Rules tutorial</a> to better understand how they work.</p>
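<p>On the origin side, the dynamic endpoints in this example only need to return the small fragments and mark them uncacheable. A hypothetical sketch (the session store and handler are illustrative, not part of any real platform):</p>

```python
import json

# Hypothetical in-memory session store, for illustration only
BASKETS = {"session-123": ["t-shirt", "mug"]}

def basket_contents(session_id):
    """Build the response body for a request like /ajax/basket_contents.php."""
    body = json.dumps({"items": BASKETS.get(session_id, [])})
    headers = {
        # The "Bypass" Page Rule already skips the Cloudflare cache;
        # this header also keeps browsers from caching the fragment.
        "Cache-Control": "private, no-store",
        "Content-Type": "application/json",
    }
    return headers, body
```

<p>The page shell stays fully cacheable; only this tiny JSON response travels to the origin on each view.</p>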
    <div>
      <h3>Optimizing further: using Railgun</h3>
      <a href="#optimizing-further-using-railgun">
        
      </a>
    </div>
    <p><i>Note: the Railgun feature is only available on the Cloudflare Business and Enterprise </i><a href="https://www.cloudflare.com/plans/"><i>plans</i></a>.</p><p>Cloudflare’s Railgun technology optimizes the connection between Cloudflare and the website origin to accelerate dynamic HTML content - content that can't be served from the Cloudflare cache.</p><p>Railgun helps in two ways:</p><ul><li><p>Establishing a persistent connection between Cloudflare and the website origin (to speed up initial connection times)</p></li><li><p>Compressing the data that passes from the origin to Cloudflare by only sending content that has changed</p></li></ul><p>Before Railgun:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4ooTWlxFNJ60jWwvzKEy1P/ee1a734c9c1021d84556e048558f6789/railgun-diagram-how-it-works-without.svg" />
            
            </figure><p>After Railgun:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4uX10OAM7zEqKm80XibrHk/f9d89979ed91184eec0bd20b7d04dbc2/railgun-diagram-how-it-works-with.svg" />
            
            </figure><p>Railgun can happily be used in conjunction with the previously discussed caching methods.</p><p>If you’ve implemented method 1 (the bypass cache on cookie method), then Railgun will accelerate the requests which pass directly to the origin due to the presence of the relevant bypass-cache cookies.</p><p>Method 2 (caching everything on Cloudflare except AJAX calls to populate dynamic sections) is already more efficient than method 1. Railgun can still be used to further accelerate the AJAX requests that pass from Cloudflare to the origin.</p><p>Railgun is a little more advanced as it requires installation of a small software package on (or very close to) the origin webserver to handle the compression. You can <a href="https://www.cloudflare.com/website-optimization/railgun/">read more about Railgun here</a> and find the <a href="https://www.cloudflare.com/docs/railgun/">installation documentation here</a>.</p><p>Ideally a <a href="https://www.cloudflare.com/solutions/ecommerce/optimization/">well-optimized ecommerce website</a> will leverage our caching service as much as possible - serving images, CSS and JavaScript from Cloudflare's network, in addition to as much static HTML content as possible. Adding our Railgun service to accelerate those inevitable non-cacheable requests to the origin webserver will help create a fantastic, speedy shopping experience for your customers.</p> ]]></content:encoded>
            <category><![CDATA[eCommerce]]></category>
            <category><![CDATA[Page Rules]]></category>
            <category><![CDATA[Railgun]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[WAF]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Best Practices]]></category>
            <guid isPermaLink="false">6pRC8uOyLFDKH8RZahQ2WD</guid>
            <dc:creator>Nick B</dc:creator>
        </item>
        <item>
            <title><![CDATA[Understanding Our Cache and the Web Cache Deception Attack]]></title>
            <link>https://blog.cloudflare.com/understanding-our-cache-and-the-web-cache-deception-attack/</link>
            <pubDate>Fri, 14 Apr 2017 15:00:00 GMT</pubDate>
            <description><![CDATA[ About a month ago, security researcher Omer Gil published the details of an attack that he calls the Web Cache Deception attack. It works against sites that sit behind a reverse proxy (like Cloudflare) and are misconfigured in a particular way. ]]></description>
            <content:encoded><![CDATA[ <p>About a month ago, security researcher <a href="https://twitter.com/omer_gil">Omer Gil</a> published <a href="https://omergil.blogspot.co.il/2017/02/web-cache-deception-attack.html">the details</a> of an attack that he calls the Web Cache Deception attack. It works against sites that sit behind a reverse proxy (like Cloudflare) and are misconfigured in a particular way. Unfortunately, the definition of "misconfigured" for the purposes of this attack changes depending on how the cache works. In this post, we're going to explain the attack and then describe the algorithm that our cache uses to decide whether or not to cache a given piece of content so that customers can be sure that they are secure against this attack.</p>
    <div>
      <h3>The Attack</h3>
      <a href="#the-attack">
        
      </a>
    </div>
    <p>First, we'll explain the basics of the Web Cache Deception attack. For those who want a more in-depth explanation, Omer's <a href="https://omergil.blogspot.co.il/2017/02/web-cache-deception-attack.html">original post</a> is a great resource.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Wy262IiC2ykqjHqHRbOwk/ac8fc41024662ee444219734662f9f9b/one-way.jpg" />
            
            </figure><p>CC <a href="https://creativecommons.org/licenses/by-sa/2.0/">BY-SA 2.0</a> - <a href="https://www.flickr.com/photos/shelleygibb/2700437267/in/photolist-57CrVe-i4jqNw-gmjSdW-b4eUjD-gmk8in-cFq3pN-2uVYE-2juSjD-d7gDoh-7ac96c-ytw77e-6k2Mw-9NnUjX-6oC4tp-9wFsmg-dsd8bt-bDTPG3-co2zqU-jFdVgc-5DHRZA-66H4P6-7jaCZF-848i8G-9aUjGk-bWVLYW-aCJNHD-buaVVA-nGA4V-soHrms-9quZAv-6MsSqe-nBG2bz-dsd7HT-d7gyTS-9kCfi-4F5xjK-cYYz9N-fFUYF-fQuqJw-dQZTkX-cNMqMJ-qrNmAB-aCJPui-dQXj68-87UrJH-phtpFE-997rCh-oA1ezU-nwpSdp-kswDL6/">image</a> by <a href="https://www.flickr.com/photos/shelleygibb/">shelleygibb</a></p><p>Imagine that you run the social media website <code>example.com</code>, and that each of your users has a newsfeed at <code>example.com/newsfeed</code>. When a user navigates to their newsfeed, the HTTP request generated by their browser might look something like this:</p>
            <pre><code>GET /newsfeed HTTP/1.1
Host: example.com
...</code></pre>
            <p>If you use Cloudflare, you don't want us to cache this request because if we did, some of your users might start seeing other users' newsfeeds instead of their own, which would be very bad. Luckily, as we'll explain below, this request won't be cached because the path in the request (the <code>/newsfeed</code> part) doesn't have a "cacheable file extension" (a file extension such as <code>.jpg</code> or <code>.css</code> that instructs Cloudflare that it's OK to cache the request).</p><p>The trouble begins if your website is configured to be flexible about what kinds of paths it can handle. In particular, the issue arises when requests to a path that doesn't exist (say, <code>/x/y/z</code>) are treated as equivalent to requests to a parent path that <i>does</i> exist (say, <code>/x</code>). For example, what happens if you get a request for the nonexistent path <code>/newsfeed/foo</code>? Depending on how your website is configured, it might just treat such a request as equivalent to a request to <code>/newsfeed</code>. For example, if you're running the <a href="https://docs.djangoproject.com/en/1.10/topics/http/urls/">Django web framework</a>, the following configuration would do just that because the regular expression <code>^newsfeed/</code> matches both <code>newsfeed/</code> and <code>newsfeed/foo</code> (Django routes omit the leading <code>/</code>):</p>
            <pre><code>from django.conf.urls import url

patterns = [url(r'^newsfeed/', ...)]</code></pre>
            <p>And here's where the problem lies. If your website does this, then a request to <code>/newsfeed/foo.jpg</code> will be treated the same as a request to <code>/newsfeed</code>. But Cloudflare, seeing the <code>.jpg</code> file extension, will think that it's OK to cache this request.</p><p>Now, you might be thinking, "So what? My website never has any links to <code>/newsfeed/foo.jpg</code> or anything like that." That's true, but that doesn't stop <i>other</i> people from trying to convince your users to visit paths like that. For example, an attacker could send this message to somebody:</p><blockquote><p>Hey, check out this cool link! <a href="https://example.com/newsfeed/foo.jpg">https://example.com/newsfeed/foo.jpg</a></p></blockquote><p>If the recipient of the message clicks on the link, they will be taken to their newsfeed. But when the request passes through Cloudflare, since the path ends in <code>.jpg</code>, we will cache it. Then the attacker can visit the same URL themselves and their request will be served from our cache, exposing your user's sensitive content.</p>
    <div>
      <h3>Defending Against the Web Cache Deception Attack</h3>
      <a href="#defending-against-the-web-cache-deception-attack">
        
      </a>
    </div>
    <p>The best way to defend against this attack is to ensure that your website isn't so permissive, and never treats requests to nonexistent paths (say, <code>/x/y/z</code>) as equivalent to requests to valid parent paths (say, <code>/x</code>). In the example above, that would mean that requests to <code>/newsfeed/foo</code> or <code>/newsfeed/foo.jpg</code> wouldn't be treated as equivalent to requests to <code>/newsfeed</code>, but would instead result in some kind of error or a redirect to a legitimate page. If we wanted to modify the Django example from above, we could add a <code>$</code> to the end of the regular expression to ensure only exact matches (in this case, a request to <code>/newsfeed/foo</code> will <a href="https://docs.djangoproject.com/en/1.10/topics/http/urls/#error-handling">result in a 404</a>):</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/EctCR6DDUYFIWG9EatWeC/be0902c6bc2d5e558554c3e31c63ec43/Screen-Shot-2018-01-19-at-10.23.10-AM.png" />
            
            </figure><p>We provide many settings that allow you to customize the way our cache will treat requests to your website. For example, if you have a Page Rule enabled for <code>/newsfeed</code> with the Cache Everything setting enabled (it's off by default), then we'll cache requests to <code>/newsfeed</code>, which could be bad. Thus, the best way to <a href="https://www.cloudflare.com/learning/security/glossary/website-security-checklist/">ensure that your website is secure</a> is to understand the rules that our cache uses to determine whether or not a request should be cached.</p>
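<p>The effect of that trailing <code>$</code> in the Django fix can be checked directly with Python's <code>re</code> module (a quick standalone illustration, outside of Django):</p>

```python
import re

# The unanchored pattern from the vulnerable example...
loose = re.compile(r'^newsfeed/')
# ...and the anchored fix: "$" requires the path to end here.
strict = re.compile(r'^newsfeed/$')

assert loose.match('newsfeed/')               # intended match
assert loose.match('newsfeed/foo.jpg')        # also matches: the bug
assert strict.match('newsfeed/')              # intended match
assert not strict.match('newsfeed/foo.jpg')   # rejected: a 404 instead
```
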
    <div>
      <h3>How Our Cache Works</h3>
      <a href="#how-our-cache-works">
        
      </a>
    </div>
    <p>When a request comes in to our network, we perform two phases of processing in order to determine whether or not to cache the origin's response to that request:</p><ul><li><p>In the <i>eligibility phase</i>, which is performed when a request first reaches our edge, we inspect the request to determine whether it should be eligible for caching. If we determine that it is not eligible, then we will not cache it. If we determine that it is eligible, then we proceed to a second disqualification phase.</p></li><li><p>In the <i>disqualification phase</i>, which is performed after we've received a response from the origin web server, we inspect the response to determine whether any characteristics disqualify the response from being cached. If nothing disqualifies it, then the response will be cached.</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2TLTgYd4yL4nBAkczqkQHn/c90ac4770733d951a17a54c815516693/page-rule-modal.png" />
            
            </figure><p><i>Configuring caching via a Page Rule</i></p><p>Note that site-wide settings or Page Rules can affect this logic. Below, when we say "a setting applies" or "the setting is," we mean that either a global setting exists which applies to all requests or a Page Rule with the setting exists that applies to the given request (e.g., a Page Rule for <code>/foo/*</code> applies to requests to <code>/foo/bar</code>, <code>/foo/baz</code>, <code>/foo/bar/baz</code>, etc). Page Rules override global rules if both apply to a given request.</p>
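<p>The "first matching rule wins" behaviour can be sketched in a few lines of Python (a hypothetical model using shell-style wildcards, not Cloudflare's actual matching code):</p>

```python
import fnmatch

def first_matching_setting(path, rules, default="standard"):
    """rules: ordered (pattern, setting) pairs, as in the dashboard.

    Only the first matching Page Rule applies; if none match,
    the site-wide (global) setting is used.
    """
    for pattern, setting in rules:
        if fnmatch.fnmatch(path, pattern):
            return setting
    return default

# Illustrative rule list: the more specific pattern comes first
rules = [
    ("/ajax/*", "bypass"),
    ("/*", "cache_everything"),
]
```

<p>Swapping the order of the two rules above would make the <code>/*</code> rule shadow the <code>/ajax/*</code> one - which is why rule ordering in the dashboard matters.</p>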
    <div>
      <h3>Eligibility Phase</h3>
      <a href="#eligibility-phase">
        
      </a>
    </div>
    <p>In the <i>eligibility phase</i>, we use characteristics of the request from the client to determine whether or not the request is eligible to be cached. If the request is not eligible, then it will not be cached. If the request is eligible, then we will perform more processing later in the disqualification phase.</p><p>The rules for eligibility are as follows:</p><ul><li><p>If the setting is Standard, Ignore Query String, or No Query String, then:</p><ul><li><p>a request is eligible to be cached if the requested path ends in one of the file extensions listed in Figure 1 below</p></li><li><p>a request <i>may</i> be eligible to be cached (depending on performance-related decisions made by our edge) if the requested path ends in one of the file extensions listed in Figure 2 below</p></li><li><p>a request is eligible to be cached if the request path is <code>/robots.txt</code></p></li></ul></li><li><p>If the setting is Cache Everything, then all requests are eligible to be cached.</p></li></ul><p>In addition to the above rules, if either of the following two conditions hold, then any decision made so far about eligibility will be overridden, and the request will not be eligible to be cached:</p><ul><li><p>If the Cache on Cookie setting is enabled and the configured cookie is <i>not</i> present in a <code>Cookie</code> header, then the request is not eligible to be cached.</p></li><li><p>If the Bypass Cache on Cookie setting is enabled and the configured cookie <i>is</i> present in a <code>Cookie</code> header, then the request is not eligible to be cached.</p></li></ul>
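<p>The eligibility rules above can be condensed into a short Python sketch (an illustrative model only, not Cloudflare's actual edge code; the extension set is just an excerpt of Figure 1, and the "sometimes cacheable" performance heuristics are omitted):</p>

```python
# Excerpt of the "always cacheable" extensions from Figure 1
ALWAYS_CACHEABLE = {"jpg", "jpeg", "gif", "png", "css", "js", "pdf"}

def is_eligible(path, cookies, setting="standard",
                cache_on_cookie=None, bypass_cache_on_cookie=None):
    if setting == "cache_everything":
        eligible = True
    else:  # Standard / Ignore Query String / No Query String
        ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
        eligible = ext in ALWAYS_CACHEABLE or path == "/robots.txt"
    # The two cookie settings override any decision made so far
    if cache_on_cookie is not None and cache_on_cookie not in cookies:
        eligible = False
    if bypass_cache_on_cookie is not None and bypass_cache_on_cookie in cookies:
        eligible = False
    return eligible
```

<p>Note how <code>/newsfeed</code> is not eligible but <code>/newsfeed/foo.jpg</code> is - exactly the property the Web Cache Deception attack exploits.</p>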
    <div>
      <h3>Disqualification Phase</h3>
      <a href="#disqualification-phase">
        
      </a>
    </div>
    <p>In the <i>disqualification phase</i>, which only occurs if a request has been marked as eligible, characteristics of the response from the origin web server can disqualify a request. If a request is disqualified, then the response will not be cached. If a request is not disqualified, then the response will be cached.</p><p>The rules for disqualification are as follows:</p><ul><li><p>If the setting is Standard, Ignore Query String, or No Query String, or if the setting is Cache Everything <i>and</i> no Edge Cache TTL is present, then:</p><ul><li><p>A <code>Cache-Control</code> header in the response from the origin with any of the following values will disqualify a request, causing it not to be cached:</p><ul><li><p><code>no-cache</code></p></li><li><p><code>max-age=0</code></p></li><li><p><code>private</code></p></li><li><p><code>no-store</code></p></li></ul></li><li><p>An <code>Expires</code> header in the response from the origin indicating any time in the past will disqualify a request, causing it not to be cached.</p></li></ul></li><li><p>If the setting is Cache Everything and an Edge Cache TTL <i>is</i> present, then a request will never be disqualified under any circumstances, and will always be cached.</p></li></ul><p>There is one further set of rules relating to the <code>Set-Cookie</code> header. The following rules only apply if a <code>Set-Cookie</code> header is present:</p><ul><li><p>If the setting is Standard, Ignore Query String, or No Query String, or if the setting is Cache Everything and an Edge Cache TTL is present, then the request will not be disqualified, but the <code>Set-Cookie</code> header will be stripped from the version of the response stored in our cache.</p></li><li><p>If the setting is Cache Everything and no Edge Cache TTL is present, then the request will be disqualified, and it will not be cached. 
The <code>Set-Cookie</code> header will be stripped from the response that is sent to the client making the request.</p></li></ul>
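<p>As with eligibility, the disqualification rules can be modelled in a few lines (again purely illustrative; for simplicity <code>Expires</code> is treated as epoch seconds rather than an HTTP date):</p>

```python
DISQUALIFYING_CC = ("no-cache", "max-age=0", "private", "no-store")

def should_cache(response_headers, now,
                 cache_everything=False, edge_cache_ttl=None):
    """Decide whether an *eligible* response gets cached."""
    if cache_everything and edge_cache_ttl is not None:
        # Never disqualified; a Set-Cookie header would be stripped
        # from the cached copy rather than preventing caching.
        return True
    cc = response_headers.get("Cache-Control", "").lower()
    if any(directive in cc for directive in DISQUALIFYING_CC):
        return False
    expires = response_headers.get("Expires")
    if expires is not None and expires < now:
        return False  # an Expires time in the past disqualifies
    if cache_everything and "Set-Cookie" in response_headers:
        return False  # Cache Everything with no Edge Cache TTL
    return True
```
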
            <pre><code>class
css
jar
js
jpg
jpeg
gif
ico
png
bmp
pict
csv
doc
docx
xls
xlsx
ps
pdf
pls
ppt
pptx
tif
tiff
ttf
otf
webp
woff
woff2
svg
svgz
eot
eps
ejs
swf
torrent
midi
mid</code></pre>
            <p><i>Figure 1: Always Cacheable File Extensions</i></p>
            <pre><code>mp3
mp4
mp4v
mpg
mpeg
mov
mkv
flv
webm
wmv
avi
ogg
m4a
wav
aac
ogv
zip
sit
tar
7z
rar
rpm
deb
dmg
iso
img
msi
msp
msm
bin
exe
dll
ra
mka
ts
m4v
asf
mk3d
rm
swf</code></pre>
            <p><i>Figure 2: Sometimes Cacheable File Extensions</i></p><p>So there you have it. As long as you follow the advice above and make sure that your site plays nicely with our cache, you should be secure against the Web Cache Deception attack.</p> ]]></content:encoded>
            <category><![CDATA[Page Rules]]></category>
            <category><![CDATA[Attacks]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Best Practices]]></category>
            <category><![CDATA[Cache]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">6XM5d3s0P5kafUnEofmpM2</guid>
            <dc:creator>Joshua Liebow-Feeser</dc:creator>
        </item>
        <item>
            <title><![CDATA[NANOG - the art of running a network and discussing common operational issues]]></title>
            <link>https://blog.cloudflare.com/nanog-the-art-of-running-a-network-and-discussing-common-operational-issues/</link>
            <pubDate>Thu, 02 Feb 2017 12:15:26 GMT</pubDate>
            <description><![CDATA[ The North American Network Operators Group (NANOG) is the locus of modern Internet innovation and the day-to-day cumulative network-operational knowledge of thousands and thousands of network engineers. ]]></description>
            <content:encoded><![CDATA[ <p>The <a href="https://nanog.org/">North American Network Operators Group</a> (NANOG) is the locus of modern Internet innovation and the day-to-day cumulative network-operational knowledge of thousands and thousands of network engineers. NANOG itself is a non-profit membership organization; but you don’t need to be a member in order to attend the conference or <a href="https://nanog.org/list/join">join</a> the mailing list. That said, if you can become a member, then you’re helping a good cause.</p><p>The next NANOG conference starts in a few days (February 6-8 2017) in <a href="https://www.nanog.org/meetings/nanog69/agenda">Washington, DC</a>. Nearly 900 network professionals are converging on the city to discuss a variety of network-related issues, both big and small, but all related to running and improving the global Internet. For this upcoming meeting, Cloudflare has three network professionals in attendance: two from the San Francisco office and one from the London office.</p><p>With the conference starting next week, it seemed a great opportunity to show readers of the blog why a NANOG conference is so worth attending.</p>
    <div>
      <h2>Tutorials</h2>
      <a href="#tutorials">
        
      </a>
    </div>
    <p>While it seems obvious how to do some network tasks (you unpack the spiffy new wireless router from its box, set up its security, and plug it in), the global Internet is somewhat more complex. Even seasoned professionals could do with a recap on how <a href="https://youtu.be/WL0ZTcfSvB4">traceroute</a> actually works, or how <a href="https://youtu.be/9ksfOUyvNi8">DNSSEC</a> operates, or this year's subtle BGP complexities, or be enlightened about <a href="https://youtu.be/_KFpXuHqHQg">Optical Networking</a>. All this can assist you with deployments within your networks or datacenters.</p>
    <div>
      <h2>Peering</h2>
      <a href="#peering">
        
      </a>
    </div>
    <p>If there’s one thing that keeps the Internet (a network-of-networks) operating, it’s peering. Peering is the act of bringing together two or more networks to allow traffic (bits, bytes, packets, email messages, web pages, audio and video streams) to flow efficiently and cleanly between source and destination. The Internet is nothing more than a collection of individual networks. NANOG provides one of many forums for diverse network operators to meet face-to-face and negotiate and enable those interconnections.</p><p>While NANOG isn’t the only event that draws networks together to discuss interconnection, it’s one of the early forums to support these peering discussions.</p>
    <div>
      <h2>Security and Reputation</h2>
      <a href="#security-and-reputation">
        
      </a>
    </div>
    <p>In this day and age we are brutally aware that security is the number-one issue when using the Internet. This is something to think about when you choose your email password, or the lock screen password on your laptop, tablet or smartphone. Hint: you should always have a lock screen!</p><p>At NANOG the security discussion is focused on a much deeper part of the global Internet: the hardware and software practices that operate and support the underlying networks we all use on a daily basis. An Internet backbone (rarely seen) is a network that moves traffic from one side of the globe to the other (or from one side of a city to the other). At NANOG we discuss how that underlying infrastructure can operate efficiently, securely, and be continually strengthened. The growth of the Internet over the last handful of decades has pushed the envelope when it comes to hardware deployments and network complexity. Sometimes it only takes one compromised box to ruin your day. Discussions at conferences like NANOG are vital to the sharing of knowledge and collective improvement of everyone's networks.</p><p>Above the hardware layer (from a network stack point of view) is the Domain Name System (DNS). DNS has always been a major subject of discussion within the NANOG community. It’s very much up to the operational community to make sure that when you type a website name into a web browser or someone’s email address into your email program, there’s a highly efficient process to convert from names to numbers (numbers, or IP addresses, are the address book and routing method of the Internet). DNS has had its fair share of focus in the security arena and it comes down to network operators (and their system administrator colleagues) to protect DNS infrastructure.</p>
    <div>
      <h2>Network Operations; best practices and stories of disasters</h2>
      <a href="#network-operations-best-practices-and-stories-of-disasters">
        
      </a>
    </div>
    <p>Nearly everyone knows that bad news sells. It’s a fact. To be honest, the same is the case in the network operator community. However, within NANOG, those stories of disasters are nearly always told from a learning and improvement point of view. There’s simply no need to repeat a failure; no-one enjoys it a second time around. Notable stories have included subjects like <a href="https://youtu.be/XNubCYBprjE">route-leaks</a>, <a href="https://youtu.be/_95pC8khh8Y?list=PLO8DR5ZGla8iHYAM_AL7ZcO8F2F4AcosD">BGP protocol hiccups</a>, <a href="https://www.nanog.org/meetings/nanog17/presentations/vixie.pdf">peering points</a>, and plenty more.</p><p>We simply can’t rule out failures within portions of the network; hence NANOG has spent plenty of time discussing redundancy. The Internet operates using routing protocols that explicitly allow for redundancy in the paths that traffic travels. Should a failure occur (a hardware failure, or a fiber cut), the theory is that the traffic will be routed around that failure. This is a recurring topic for NANOG meetings. <a href="https://youtu.be/GMi3pP21nHc">Subsea cables</a> (and their occasional cuts) always make for good talks.</p>
    <div>
      <h2>Network Automation</h2>
      <a href="#network-automation">
        
      </a>
    </div>
    <p>While we learned twenty or more years ago how to type into Internet routers on the command line, those days are quickly becoming history. We simply can’t scale if network operational engineers have to type the same commands into hundreds (or thousands?) of boxes around the globe. We need automation. NANOG has been a leader in this space. Cloudflare has been active in this arena and Mircea Ulinic presented our experience with <a href="https://youtu.be/gV2918bH5_c?list=PLO8DR5ZGla8hcpeEDSBNPE5OrZf70iXZg">Network Automation with Salt and NAPALM</a> at the previous NANOG meeting. Mircea (and Jérôme Fleury) will be giving a follow-up in-depth tutorial on the subject at next week’s meeting.</p>
    <div>
      <h2>Many more subjects covered</h2>
      <a href="#many-more-subjects-covered">
        
      </a>
    </div>
    <p>The first NANOG conference was held in June 1994 in Ann Arbor, Michigan and the conference has grown significantly since then. While it’s fun to follow the <a href="https://www.nanog.org/history">history</a>, it’s maybe more important to realize that NANOG has covered a multitude of subjects since that start. Go scan the archives at <a href="https://www.nanog.org/">nanog.org</a> and/or watch some of the online <a href="https://www.youtube.com/user/TeamNANOG/playlists">videos</a>.</p>
    <div>
      <h2>The socials (downtime between technical talks)</h2>
      <a href="#the-socials-downtime-between-technical-talks">
        
      </a>
    </div>
    <p>Let’s not forget the advantages of spending time with other operators within a relaxed setting. After all, sometimes the big conversations happen when spending time over a beer discussing common issues. NANOG has long understood this and it’s clear that the Tuesday evening Beer ’n Gear social is set up specifically to let network geeks both grab a drink (soft drinks included) and poke around with the latest and greatest network hardware on show. The social is as much about blinking lights on shiny network boxes as it is about tracking down that network buddy.</p><p>Oh, and there’s a fair number of vendor giveaways (so far, 15 hardware and software vendors have signed up for next week’s event). After all, who doesn’t need a new t-shirt?</p><p>But there’s more to the downtime and casual hallway conversations. For myself (the author of this blog), I know that sometimes the most important work is done within the hallways during breaks in the meeting vs. standing in front of the microphone presenting at the podium. The industry has long recognized this and the NANOG organizers were one of the early pioneers in providing full-time coffee and snacks that cover the full conference agenda times. Why? Because sometimes you have to step out of the regular presentations to meet and discuss with someone from another network. NANOG knows its audience!</p>
    <div>
      <h2>Besides NANOG, there’s IETF, ICANN, ARIN, and many more</h2>
      <a href="#besides-nanog-theres-ietf-icann-arin-and-many-more">
        
      </a>
    </div>
    <p>NANOG isn’t the only forum to discuss network operational issues; however, it’s arguably the largest. It started off as a “North American” entity; however, in the same way that the Internet doesn’t have country barriers, NANOG meetings (which take place in the US, Canada and at least once in the Caribbean) have fostered an online community that has grown into a global resource. The mailing list (well worth reviewing) is a bastion of networking discussions.</p><p>In a different realm, the <a href="https://www.ietf.org/about/">Internet Engineering Task Force</a> (IETF) focuses on protocol standards. Its existence is why diverse entities can communicate. Operators participate in IETF meetings; however, it’s a meeting focused outside of the core operational mindset.</p><p>Central to the Internet’s existence is ICANN. Meeting three times a year at locations around the globe, it focuses on the governance arena and on domain names and related items. Within the meetings there’s an excellent <a href="https://www.icann.org/resources/pages/tech-day-archive-2015-10-15-en">Tech Day</a>.</p><p>In the numbers arena, <a href="https://arin.net/">ARIN</a> is an example of a Regional Internet Registry (an RIR) that runs member meetings. An RIR deals with allocating resources like IP addresses and AS numbers. ARIN focuses on the North American area and sometimes holds its meetings alongside NANOG meetings.</p><p>ARIN’s counterparts in other parts of the world also hold meetings. Sometimes they simply focus on resource policy and sometimes they also focus on network operational issues. For example, RIPE (in Europe, Central Asia and the Middle East) runs a five-day meeting that covers operational and policy issues. APNIC (Asia Pacific), AFRINIC (Africa), LACNIC (Latin America &amp; Caribbean) all do similar variations. There isn’t one absolute method and that's a good thing. 
It’s worth pointing out that APNIC holds its members meetings once a year in conjunction with <a href="https://2017.apricot.net/">APRICOT</a>, which is the primary operations meeting in the Asia Pacific region.</p><p>While NANOG is somewhat focused on North America, there are also the regional NOGs. These regional NOGs are vital to the education of network operators globally. Japan has JANOG, Southern Africa has SAFNOG, MENOG is in the Middle East, AUSNOG &amp; NZNOG are in Australia &amp; New Zealand, DENOG is in Germany, PHNOG is in the Philippines, and, just to be different, the UK has UKNOF (“Forum” vs. “Group”). It would be hard to list them all, but each is a worthwhile forum for operational discussions.</p><p>Peering-specific meetings also exist: the <a href="https://www.peeringforum.com/">Global Peering Forum</a>, the <a href="https://www.peering-forum.eu/">European Peering Forum</a>, and the <a href="http://www.lacnog.org/wg-peeringforum/">Peering Forum de LACNOG</a>, for example. These focus on bilateral meetings within a group of network operators or administrators and specifically on interconnect agreements.</p><p>In the commercial realm, there are plenty of other meetings attended by networks like Cloudflare. <a href="https://council.ptc.org/">PTC</a> and <a href="https://www.internationaltelecomsweek.com/">International Telecoms Week</a> (ITW) are global telecom meetings specifically designed to host one-to-one (bilateral) meetings. They are very commercial in nature and less operational in focus.</p>
    <div>
      <h2>NANOG isn’t the only forum Cloudflare attends</h2>
      <a href="#nanog-isnt-the-only-forum-cloudflare-attends">
        
      </a>
    </div>
<p>As you would guess, you will find our network team at RIR meetings, sometimes at IETF meetings, sometimes at ICANN meetings, and often at various regional NOG meetings (like SANOG in South Asia, NoNOG in Norway, RONOG in Romania, AUSNOG/NZNOG in Australia/New Zealand, and many other NOGs). We get around; however, we also run a global network and we need to interact with many, many networks around the globe. These meetings provide an ideal opportunity for one-to-one discussions.</p><p>If you've heard something you like from Cloudflare at one of these operationally focused conferences, then check out our <a href="https://www.cloudflare.com/join-our-team/">jobs listings</a> (in various North American cities, London, Singapore, and beyond!)</p> ]]></content:encoded>
            <category><![CDATA[BGP]]></category>
            <category><![CDATA[Peering]]></category>
            <category><![CDATA[Events]]></category>
            <category><![CDATA[Salt]]></category>
            <category><![CDATA[Best Practices]]></category>
            <guid isPermaLink="false">510pAYiPNeGP55tWvRXwko</guid>
            <dc:creator>Martin J Levy</dc:creator>
        </item>
        <item>
            <title><![CDATA[Webcast: Hardening Microservices Security]]></title>
            <link>https://blog.cloudflare.com/webcast-hardening-microservices-security/</link>
            <pubDate>Fri, 16 Sep 2016 19:23:07 GMT</pubDate>
            <description><![CDATA[ Microservices is one of the buzz words of the moment. Beyond the buzz, microservices architecture offers a great opportunity for developers to rethink how they design, develop, and secure applications. ]]></description>
<content:encoded><![CDATA[ <p><i>Microservices</i> is one of the buzzwords of the moment. Beyond the buzz, microservices architecture offers a great opportunity for developers to rethink how they design, develop, and secure applications.</p><p>On <b>Wednesday, September 21st, 2016 at 10am PT/1pm ET</b> join SANS Technology Institute instructor and courseware author, <a href="https://www.sans.org/instructors/david-hoelzer">David Hoelzer</a>, as well as CloudFlare Solutions Engineer, <a href="https://www.linkedin.com/in/mattsilverlock">Matthew Silverlock</a>, as they discuss best practices for adopting and deploying microservices securely. During the session they will cover:</p><ul><li><p>How microservices differ from SOA or monolithic architectures</p></li><li><p>Best practices for adopting and deploying secure microservices for production use</p></li><li><p>Avoiding continuous delivery of new vulnerabilities</p></li><li><p>Limiting attack vectors on a growing number of API endpoints</p></li><li><p>Protecting Internet-facing services from resource exhaustion</p></li></ul><p>Don't miss this chance to learn from the pros. <a href="https://www.sans.org/webcasts/102822">Register now!</a></p> ]]></content:encoded>
            <category><![CDATA[Events]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Best Practices]]></category>
            <category><![CDATA[Webinars]]></category>
            <guid isPermaLink="false">7kytWCCerMKMQDjIzCPhyb</guid>
            <dc:creator>Ryan Knight</dc:creator>
        </item>
        <item>
            <title><![CDATA[This is strictly a violation of the TCP specification]]></title>
            <link>https://blog.cloudflare.com/this-is-strictly-a-violation-of-the-tcp-specification/</link>
            <pubDate>Fri, 12 Aug 2016 13:03:26 GMT</pubDate>
<description><![CDATA[ I was asked to debug another weird issue on our network. Apparently, every now and then, a connection going through CloudFlare would time out with a 522 HTTP error. ]]></description>
<content:encoded><![CDATA[ <p>I was asked to debug another weird issue on our network. Apparently, every now and then, a connection going through CloudFlare would time out with a 522 HTTP error.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2G5eLlitmSrxYlXxvBL47P/3b7ccb5c5278e82642aa321fb89abd21/16132759228_7eed8f32d1_z.jpg" />
            
</figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/cosmicherb70/16132759228/">image</a> by Chris Combe</p><p>A <a href="https://support.cloudflare.com/hc/en-us/articles/200171906-Error-522-Connection-timed-out">522 error on CloudFlare</a> indicates a connection issue between our edge server and the origin server. Most often the blame is on the origin server side - the origin server is slow, offline, or encountering high packet loss. Less often the problem is on our side.</p><p>In the case I was debugging it was neither. The Internet connectivity between CloudFlare and the origin was perfect. No packet loss, flat latency. So why did we see a 522 error?</p><p>The root cause of this issue was pretty complex. After a lot of debugging we identified an important symptom: sometimes, once in thousands of runs, our test program failed to establish a connection between two daemons on the same machine. To be precise, an NGINX instance was trying to establish a TCP connection to our internal acceleration service on localhost. This failed with a timeout error.</p><p>Once we knew what to look for we were able to reproduce this with good old <code>netcat</code>. After a couple dozen runs this is what we saw:</p>
            <pre><code>$ nc 127.0.0.1 5000  -v
nc: connect to 127.0.0.1 port 5000 (tcp) failed: Connection timed out</code></pre>
            <p>The view from <code>strace</code>:</p>
            <pre><code>socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(5000), sin_addr=inet_addr("127.0.0.1")}, 16) = -110 ETIMEDOUT</code></pre>
            <p><code>netcat</code> calls <code>connect()</code> to establish a connection to localhost. This takes a long time and eventually fails with <code>ETIMEDOUT</code> error. Tcpdump confirms that <code>connect()</code> did send SYN packets over loopback but never received any SYN+ACKs:</p>
            <pre><code>$ sudo tcpdump -ni lo port 5000 -ttttt -S
00:00:02.405887 IP 127.0.0.12.59220 &gt; 127.0.0.1.5000: Flags [S], seq 220451580, win 43690, options [mss 65495,sackOK,TS val 15971607 ecr 0,nop,wscale 7], length 0
00:00:03.406625 IP 127.0.0.12.59220 &gt; 127.0.0.1.5000: Flags [S], seq 220451580, win 43690, options [mss 65495,sackOK,TS val 15971857 ecr 0,nop,wscale 7], length 0
... 5 more ...</code></pre>
            <p>Hold on. What just happened here?</p><p>Well, we called <code>connect()</code> to localhost and it timed out. The SYN packets went off over loopback to localhost but were never answered.</p>
    <div>
      <h3>Loopback congestion</h3>
      <a href="#loopback-congestion">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3bdZxdRdturLuABS6ep2WV/c8fa6def43cc2ce57a89e2ee86508129/26449341072_009ae28070_z.jpg" />
            
</figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/akj1706/26449341072">image</a> by akj1706</p><p>The first thought is about Internet stability. Maybe the SYN packets were lost? A little-known fact is that it's not possible to have any packet loss or congestion on the loopback interface. The <a href="http://lxr.free-electrons.com/source/drivers/net/loopback.c">loopback works magically</a>: when an application sends packets to it, they are delivered immediately, still within the <code>send</code> syscall handling, to the appropriate target. There is no buffering over loopback. Calling <code>send</code> over loopback triggers iptables and the network stack delivery mechanisms, and <i>delivers</i> the packet to the appropriate queue of the target application. Assuming the target application has some space in its buffers, packet loss over loopback is not possible.</p>
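<p>This synchronous delivery is easy to observe from user space. Below is a small Python sketch (illustrative, not from the original investigation; it uses UDP for brevity and assumes a Linux loopback): a datagram is readable on a non-blocking socket immediately after <code>sendto()</code> returns, with no sleep or polling in between:</p>

```python
import socket

# Receiver: a non-blocking UDP socket bound to loopback.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(('127.0.0.1', 0))
rx.setblocking(False)

# Sender: over loopback, sendto() pushes the datagram into the
# receiver's queue synchronously, within the syscall itself.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b'ping', rx.getsockname())

# No sleep needed: the data is already queued on rx, so a
# non-blocking recv() succeeds instead of raising EWOULDBLOCK.
data = rx.recv(16)
print(data)
```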
    <div>
      <h3>Maybe the listening application misbehaved?</h3>
      <a href="#maybe-the-listening-application-misbehaved">
        
      </a>
    </div>
<p>Under normal circumstances connections to localhost are not supposed to time out. There is one corner case when this may happen though - when the listening application does not call <code>accept()</code> fast enough.</p><p>When that happens, the listening socket's accept queue fills up, and the default behavior is to drop new SYN packets. The intention is to cause push-back, to slow down the rate of incoming connections. The peers will eventually re-send their SYN packets, and hopefully by that time the accept queue will have been drained. This behavior is controlled by the <code>tcp_abort_on_overflow</code> <a href="https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt">sysctl</a>.</p><p>But this accept queue overflow did not happen in our case. Our listening application had an empty accept queue. We checked this with the <code>ss</code> command:</p>
            <pre><code>$ ss -n4lt 'sport = :5000'
State      Recv-Q Send-Q  Local Address:Port    Peer Address:Port
LISTEN     0      128                 *:5000               *:*</code></pre>
            <p>The <code>Send-Q</code> column <a href="https://github.com/torvalds/linux/blob/c1e64e298b8cad309091b95d8436a0255c84f54a/net/ipv4/tcp_diag.c#L26">shows the backlog / accept queue size</a> given to <code>listen()</code> syscall - 128 in our case. The <code>Recv-Q</code> reports on the number of outstanding connections in the accept queue - zero.</p>
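<p>For contrast, here is what an actual accept queue overflow looks like. This Python sketch (Linux behavior; the backlog and timeout values are arbitrary) listens with a tiny backlog and never calls <code>accept()</code>: the first couple of connections complete the handshake and sit in the queue, while later SYNs are silently dropped, so those <code>connect()</code> calls hang until their timeout fires:</p>

```python
import socket

# A listener that never calls accept(): its accept queue fills up.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))
srv.listen(1)                      # deliberately tiny backlog
addr = srv.getsockname()

clients, timed_out = [], 0
for _ in range(8):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.settimeout(0.5)              # give up after half a second
    try:
        c.connect(addr)            # roughly backlog + 1 of these succeed
    except socket.timeout:
        timed_out += 1             # SYN dropped, never answered
    clients.append(c)

print(timed_out)
```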
    <div>
      <h3>The problem</h3>
      <a href="#the-problem">
        
      </a>
    </div>
    <p>To recap: we are establishing connections to localhost. Most of them work fine but sometimes the <code>connect()</code> syscall times out. The SYN packets are being sent over loopback. Because it's loopback they <i>are</i> being delivered to the listening socket. The listening socket accept queue is empty, but we see no SYN+ACKs.</p><p>Further investigation revealed something peculiar. We noticed hundreds of CLOSE_WAIT sockets:</p>
            <pre><code>$ ss -n4t | head
State      Recv-Q Send-Q  Local Address:Port    Peer Address:Port
CLOSE-WAIT 1      0           127.0.0.1:5000       127.0.0.1:36599
CLOSE-WAIT 1      0           127.0.0.1:5000       127.0.0.1:36467
CLOSE-WAIT 1      0           127.0.0.1:5000       127.0.0.1:36154
CLOSE-WAIT 1      0           127.0.0.1:5000       127.0.0.1:36412
CLOSE-WAIT 1      0           127.0.0.1:5000       127.0.0.1:36536
...</code></pre>
            
    <div>
      <h3>What is CLOSE_WAIT anyway?</h3>
      <a href="#what-is-close_wait-anyway">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ts0oIJkae9HpoHZtvZAsf/b5a86e303739746bb9f5afecb8f2f90a/20147524535_8c6ac1c853_z.jpg" />
            
</figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/sidelong/20147524535">image</a> by DaveBleasdale</p><p>Citing the <a href="https://access.redhat.com/solutions/437133">Red Hat docs</a>:</p><p><i>CLOSE_WAIT - Indicates that the server has received the first FIN signal from the client and the connection is in the process of being closed. This means the socket is waiting for the application to execute </i><code><i>close()</i></code><i>. A socket can be in CLOSE_WAIT state indefinitely until the application closes it. Faulty scenarios would be like a file descriptor leak: server not executing </i><code><i>close()</i></code><i> on sockets leading to pile up of CLOSE_WAIT sockets.</i></p><p>This makes sense. Indeed, we were able to confirm the listening application leaks sockets. Hurray, good progress!</p><p>The leaking sockets don't explain everything though.</p><p>Usually a Linux process can open up to 1,024 file descriptors. If our application did run out of file descriptors, the <code>accept</code> syscall would return the EMFILE error. If the application further mishandled this error case, that could result in losing incoming SYN packets. Failed <code>accept</code> calls will <a href="https://github.com/torvalds/linux/blob/c1e64e298b8cad309091b95d8436a0255c84f54a/net/socket.c#L1438">not dequeue a socket from the accept queue</a>, causing the accept queue to grow. The accept queue will not be drained and will eventually overflow. An overflowing accept queue could result in dropped SYN packets and failing connection attempts.</p><p>But this is not what happened here. Our application hadn't run out of file descriptors yet. This can be verified by counting the file descriptors in the <code>/proc/&lt;pid&gt;/fd</code> directory:</p>
            <pre><code>$ ls /proc/` pidof listener `/fd | wc -l
517</code></pre>
            <p>517 file descriptors are comfortably far from the 1,024 file descriptor limit. Also, we earlier showed with <code>ss</code> that the accept queue is empty. So why did our connections time out?</p>
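<p>As an aside, the EMFILE failure mode described earlier is easy to provoke in isolation. A Python sketch (the limit of 32 is arbitrary): lower <code>RLIMIT_NOFILE</code> with <code>setrlimit()</code> and open sockets until the kernel refuses:</p>

```python
import errno
import resource
import socket

# Artificially lower the per-process file descriptor limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (32, hard))

socks, err = [], None
try:
    while True:
        socks.append(socket.socket())   # eat file descriptors
except OSError as e:
    err = e.errno                       # EMFILE: fd limit hit
finally:
    for s in socks:
        s.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print(err == errno.EMFILE)
```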
    <div>
      <h3>What really happens</h3>
      <a href="#what-really-happens">
        
      </a>
    </div>
<p>The root cause of the problem is definitely our application leaking sockets. The symptom though - connections timing out - is still unexplained.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6SnoSfKadf8y4R4YJk1IYw/0243e1e7d60760e303fcf65739137bcb/Screen-Shot-2016-08-11-at-23-59-05-1.png" />
            
</figure><p>Time to raise the curtain of doubt. Here is what happens.</p><p>The listening application leaks sockets; they are stuck in the CLOSE_WAIT TCP state forever. These sockets look like (127.0.0.1:5000, 127.0.0.1:some-port). The client socket at the other end of the connection is (127.0.0.1:some-port, 127.0.0.1:5000), and is properly closed and cleaned up.</p><p>When the client application quits, the (127.0.0.1:some-port, 127.0.0.1:5000) socket enters the FIN_WAIT_1 state and then quickly transitions to FIN_WAIT_2. The FIN_WAIT_2 state should move on to TIME_WAIT if the client received a FIN packet, but this never happens. The FIN_WAIT_2 state eventually times out. On Linux this takes 60 seconds, controlled by the <code>net.ipv4.tcp_fin_timeout</code> sysctl.</p><p>This is where the problem starts. The (127.0.0.1:5000, 127.0.0.1:some-port) socket is still in the CLOSE_WAIT state, while (127.0.0.1:some-port, 127.0.0.1:5000) has been cleaned up and is ready to be reused. When the port number is reused the result is a total mess. The new connection's socket won't be able to advance from the SYN_SENT state, while the old one is stuck in CLOSE_WAIT. The SYN_SENT socket will eventually give up, failing with ETIMEDOUT.</p>
    <div>
      <h3>How to reproduce</h3>
      <a href="#how-to-reproduce">
        
      </a>
    </div>
<p>It all starts with a listening application that leaks sockets and forgets to call <code>close()</code>. This kind of bug does happen in complex applications. Example <a href="https://github.com/cloudflare/cloudflare-blog/blob/master/2016-08-time-out/listener.go">buggy code is available here</a>. When you run it, nothing will happen initially. <code>ss</code> will show a usual listening socket:</p>
            <pre><code>$ go build listener.go &amp;&amp; ./listener &amp;
$ ss -n4tpl 'sport = :5000'
State      Recv-Q Send-Q  Local Address:Port    Peer Address:Port
LISTEN     0      128                 *:5000               *:*      users:(("listener",81425,3))
</code></pre>
            <p>Then we have a client application. The client behaves correctly - it establishes a connection and after a while it closes it. We can demonstrate this with <code>nc</code>:</p>
            <pre><code>$ nc -4 localhost 5000 &amp;
$ ss -n4tp '( dport = :5000 or sport = :5000 )'
State      Recv-Q Send-Q  Local Address:Port    Peer Address:Port
ESTAB      0      0           127.0.0.1:5000       127.0.0.1:36613  users:(("listener",81425,5))
ESTAB      0      0           127.0.0.1:36613      127.0.0.1:5000   users:(("nc",81456,3))</code></pre>
            <p>As you see above <code>ss</code> shows two TCP sockets, representing the two ends of the TCP connection. The client one is (127.0.0.1:36613, 127.0.0.1:5000), the server one (127.0.0.1:5000, 127.0.0.1:36613).</p><p>The next step is to gracefully close the client connection:</p>
            <pre><code>$ kill `pidof nc`</code></pre>
            <p>Now the connections enter TCP cleanup stages: FIN_WAIT_2 for the client connection, and CLOSE_WAIT for the server one (if you want to read more about these TCP states <a href="https://benohead.com/tcp-about-fin_wait_2-time_wait-and-close_wait/">here's a recommended read</a>):</p>
            <pre><code>$ ss -n4tp
State      Recv-Q Send-Q  Local Address:Port    Peer Address:Port
CLOSE-WAIT 1      0           127.0.0.1:5000       127.0.0.1:36613  users:(("listener",81425,5))
FIN-WAIT-2 0      0           127.0.0.1:36613      127.0.0.1:5000</code></pre>
            <p>After a while FIN_WAIT_2 will expire:</p>
            <pre><code>$ ss -n4tp
State      Recv-Q Send-Q  Local Address:Port    Peer Address:Port
CLOSE-WAIT 1      0           127.0.0.1:5000       127.0.0.1:36613  users:(("listener",81425,5))</code></pre>
<p>But the CLOSE_WAIT socket stays around! Since we have a leaked file descriptor in the <code>listener</code> program, the kernel is not allowed to move the socket on to the LAST_ACK state. It is stuck in CLOSE_WAIT indefinitely. This stray CLOSE_WAIT would not be a problem if the same port pair were never reused. Unfortunately, it is reused from time to time, and that causes the problem.</p><p>To see this we need to launch hundreds of <code>nc</code> instances and hope the kernel will assign the colliding port number to one of them. The affected <code>nc</code> will be stuck in <code>connect()</code> for a while:</p>
            <pre><code>$ nc -v -4 localhost 5000 -w0
...</code></pre>
            <p>We can use the <code>ss</code> to confirm that the ports indeed collide:</p>
            <pre><code>SYN-SENT   0  1   127.0.0.1:36613      127.0.0.1:5000   users:(("nc",89908,3))
CLOSE-WAIT 1  0   127.0.0.1:5000       127.0.0.1:36613  users:(("listener",81425,5))</code></pre>
<p>In our example the kernel allocated the source address (127.0.0.1:36613) to the <code>nc</code> process. This TCP flow is okay to be used for a connection going <i>to</i> the listener application. But the listener will not be able to allocate a flow in the reverse direction, since the (127.0.0.1:5000, 127.0.0.1:36613) flow from a previous connection is still in use and remains in the CLOSE_WAIT state.</p><p>The kernel gets confused. It keeps retransmitting the SYN packets, but the other end never responds since its TCP socket is stuck in the CLOSE_WAIT state. Eventually our affected <code>netcat</code> will die with an unhappy ETIMEDOUT error message:</p>
            <pre><code>...
nc: connect to localhost port 5000 (tcp) failed: Connection timed out</code></pre>
            <p>If you want to reproduce this weird scenario consider running this script. It will greatly increase the probability of netcat hitting the conflicted socket:</p>
            <pre><code>$ for i in `seq 500`; do nc -v -4 -s 127.0.0.1 localhost 5000 -w0; done</code></pre>
<p>A little-known fact is that the source port automatically assigned by the kernel is incremental, unless you <a href="https://idea.popcount.org/2014-04-03-bind-before-connect/">select the source IP manually</a>, in which case the source port is random. This bash script will create a minefield of CLOSE_WAIT sockets randomly distributed across the ephemeral port range.</p>
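<p>The leak itself can also be reproduced in a few lines of Python, with no <code>nc</code> involved. This sketch is illustrative (Linux only, since it inspects <code>/proc/net/tcp</code>; the port is whatever the kernel hands out): a server accepts a connection but never closes it, the client closes first, and the server-side socket ends up stuck in CLOSE_WAIT, which appears as state <code>08</code> in <code>/proc/net/tcp</code>:</p>

```python
import socket
import time

# A leaky listener: it accepts the connection but never close()s it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))
srv.listen(16)
port = srv.getsockname()[1]

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(('127.0.0.1', port))
leaked, _ = srv.accept()       # the fd we "forget" to close

cli.close()                    # client closes first and sends a FIN
time.sleep(0.2)                # let the kernel process the FIN

def states(port):
    # TCP states (hex codes from /proc/net/tcp) of sockets whose local
    # address is bound to the given port: '0A' LISTEN, '08' CLOSE_WAIT.
    result = []
    for line in open('/proc/net/tcp').readlines()[1:]:
        fields = line.split()
        if fields[1].endswith(':%04X' % port):
            result.append(fields[3])
    return result

print(states(port))
```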
    <div>
      <h3>Final words</h3>
      <a href="#final-words">
        
      </a>
    </div>
<p>If there's a moral to the story, it's to watch out for CLOSE_WAIT sockets. Their presence indicates leaking sockets, and with leaking sockets some incoming connections may time out. The presence of many FIN_WAIT_2 sockets says the problem is not on the current machine but at the remote end of the connection.</p><p>Furthermore, this bug shows that it is possible for the states of the two ends of a TCP connection to be at odds, even if the connection is over the loopback interface.</p><p>It seems that the design decisions made by the BSD Socket API have unexpected, long-lasting consequences. If you think about it: why exactly can a socket automatically expire out of the FIN_WAIT_2 state, but not move out of CLOSE_WAIT after some grace period? This is very confusing... And it should be! The original TCP specification does not allow automatic state transitions out of the FIN_WAIT_2 state! According to the spec, FIN_WAIT_2 is supposed to stay around until the application on the other side cleans up.</p><p>Let me leave you with the <a href="http://man7.org/linux/man-pages/man7/tcp.7.html"><code>tcp(7)</code> manpage</a> describing the <code>tcp_fin_timeout</code> setting:</p>
            <pre><code>tcp_fin_timeout (integer; default: 60)
      This specifies how many seconds to wait for a final FIN packet
      before the socket is forcibly closed.  This is strictly a
      violation of the TCP specification, but required to prevent
      denial-of-service attacks.</code></pre>
            <p>I think now we understand why automatically closing FIN_WAIT_2 is strictly speaking a violation of the TCP specification.</p><p><i>Do you enjoy playing with low level networking bits? Are you interested in dealing with some of the largest DDoS attacks ever seen?</i></p><p><i>If so you should definitely have a look at the </i><a href="https://www.cloudflare.com/join-our-team/"><i>open positions</i></a><i> in our London, San Francisco, Singapore, Champaign (IL) and Austin (TX) offices!</i></p> ]]></content:encoded>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[Best Practices]]></category>
            <guid isPermaLink="false">3jgOd9ihenkLmtrrWkFiZr</guid>
            <dc:creator>Marek Majkowski</dc:creator>
        </item>
        <item>
            <title><![CDATA[CloudFlare's JSON-powered Documentation Generator]]></title>
            <link>https://blog.cloudflare.com/cloudflares-json-powered-documentation-generator/</link>
            <pubDate>Wed, 03 Aug 2016 11:26:13 GMT</pubDate>
<description><![CDATA[ Everything that's possible in the CloudFlare Dashboard is also possible through our RESTful API. We use the same API to power the dashboard itself. ]]></description>
<content:encoded><![CDATA[ <p>Everything that's possible in the CloudFlare Dashboard is also possible through our <a href="https://api.cloudflare.com/">RESTful API</a>. We use the same API to power the dashboard itself.</p><p>In order to keep track of all our endpoints, we use a rich notation called <a href="http://json-schema.org/">JSON Hyper-Schema</a>. These schemas are used to generate the complete HTML documentation that you can see at <a href="https://api.cloudflare.com">https://api.cloudflare.com</a>. Today, we want to share a set of tools that we use in this process.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1n32fKj9DXVOc6QY91VJVG/6480da799b02968cec8ee65976c104b9/6958818812_8331f1d4bb_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/bevrichardmartin/6958818812/in/photolist-bAVLoG-dZkRa-6oG7DU-aAeaJK-6dpnNT-9zZaWW-aB2QwA-8itsnP-3mCTJi-5e7V72-6UJiQ8-dZkDA-dSLMm6-urUYS-9kzL5y-ds3U9a-dZkyv-azQKum-dZkQe-9URudj-4Anhwd-oWfUsx-dZkKJ-93ifzr-am61PB-azK1kb-oCxxwy-azKje8-6wQeTG-avwMcD-6DisSN-zovQbm-8ZxtNc-dZkCB-dZkH3-8qkKDo-9kqMrP-dZkLt-AQqNy-9A2a8R-5sr3DS-8LiD8F-8ZAxUj-dZkB2-bubYjp-5vGe5m-7XQ1wY-DoA4q-4bbw9s-aQsVjr">image</a> by <a href="https://www.flickr.com/photos/bevrichardmartin/">Richard Martin</a></p>
    <div>
      <h3>JSON Schema</h3>
      <a href="#json-schema">
        
      </a>
    </div>
<p>JSON Schema is a powerful way to describe your JSON data format. It provides <b>complete structural validation</b> and can be used for things like validation of incoming requests. JSON Hyper-Schema further extends this format with links and gives you a way to describe your API.</p>
    <div>
      <h4>JSON Schema Example</h4>
      <a href="#json-schema-example">
        
      </a>
    </div>
    
            <pre><code>{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "age": { "type": "number" },
    "address": {
      "type": "object",
      "properties": {
        "street_address": { "type": "string" },
        "city": { "type": "string" },
        "state": { "type": "string" },
        "country": { "type" : "string" }
      }
    }
  }
}</code></pre>
            
    <div>
      <h4>Matching JSON</h4>
      <a href="#matching-json">
        
      </a>
    </div>
    
            <pre><code>{
  "name": "John Doe",
  "age": 45,
  "address": {
    "street_address": "12433 State St NW",
    "city": "Atlanta",
    "state": "Georgia",
    "country": "United States"
  }
}</code></pre>
            <p>JSON Schema supports all simple data types. It also defines some special meta properties including <code>title</code>, <code>description</code>, <code>default</code>, <code>enum</code>, <code>id</code>, <code>$ref</code>, <code>$schema</code>, <code>allOf</code>, <code>anyOf</code>, <code>oneOf</code>, and more. The most powerful construct is <code>$ref</code>. It provides similar functionality to hypertext links. You can reference external schemas (external reference) or a fragment inside the current schema (internal reference). This way you can easily compose and combine multiple schemas together without repeating yourself.</p><p>JSON Hyper-Schema introduces another property called <b>links</b> where you define your API links, methods, request and response formats, etc. The best way to learn more about JSON Schemas is to visit <a href="https://spacetelescope.github.io/understanding-json-schema/">Understanding JSON Schema</a>. You can also visit the official <a href="http://json-schema.org/">specification website</a> or <a href="https://github.com/json-schema/json-schema/wiki">wiki</a>. If you want to jump straight into examples, try <a href="https://github.com/cloudflare/doca/tree/master/example">this</a>.</p>
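<p>To make the validation idea concrete, here is a toy Python validator covering just the <code>type</code> and <code>properties</code> keywords used in the example above. This is only an illustration; real deployments should use a full JSON Schema library:</p>

```python
# Maps the JSON Schema primitive types we need onto Python types.
TYPES = {'object': dict, 'string': str, 'number': (int, float)}

def check(instance, schema):
    """Toy structural validation: 'type' and nested 'properties' only."""
    expected = schema.get('type')
    if expected and not isinstance(instance, TYPES[expected]):
        return False
    if isinstance(instance, dict):
        for key, subschema in schema.get('properties', {}).items():
            if key in instance and not check(instance[key], subschema):
                return False
    return True

schema = {
    'type': 'object',
    'properties': {
        'name': {'type': 'string'},
        'age': {'type': 'number'},
    },
}

print(check({'name': 'John Doe', 'age': 45}, schema))     # True
print(check({'name': 'John Doe', 'age': 'old'}, schema))  # False
```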
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Sq4gfOa1y0eMxUgAWoZCU/d648e5824f765a8f37c1731d212915cc/6825340663_f67969a7aa_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/twalmsley/6825340663/in/photolist-bp8DZi-9pvEMm-fnCNLh-6t59b-aiJwYD-76pUCy-76pUcm-76pPCC-bkAcgF-9mrgnw-gG1f-6q4NgW-9mrgw7-ta4a-52Yp1-7Z7Z56-8mM1qQ-5pJv9j-6bpLQw-8qaRq-dZXYpi-5nPGJe-8UqDp-8ZSb3r-4n8Lvx-47d3Bj-8qaR4-KCEiz-47d3xA-6fpSGM-478Y7R-a7siMj-7U6zet-478Y8T-7Y3UXm-478Y4D-o5vmz-47d3yG-47d3cA-ofkBv8-47d3D7-9rPdZg-4BvPB5-SbdQQ-6SVtrJ-478XYZ-qVHyFt-4dvxiv-7W2Uo-6BSdd3">image</a> by <a href="https://www.flickr.com/photos/twalmsley/">Tony Walmsley</a></p>
    <div>
      <h3>Generating Documentation: Tools</h3>
      <a href="#generating-documentation-tools">
        
      </a>
    </div>
<p>We already have an open source library that can generate complete HTML documentation from JSON Schema files and <a href="http://handlebarsjs.com/">Handlebars.js</a> templates. It's called <a href="https://github.com/cloudflare/json-schema-docs-generator">JSON Schema Docs Generator (JSDC)</a>. However, it has some drawbacks that make it hard to use for other teams:</p><ul><li><p>Complicated configuration</p></li><li><p>It's necessary to rebuild everything with every change (slow)</p></li><li><p>Templates cannot have their own dependencies</p></li><li><p>All additional scripting must be in a different place</p></li><li><p>It is hard to further customize it (splitting into sections, pages)</p></li></ul><p>We wanted something more modular and extensible that addresses the above issues, while still producing ready-to-go output with just a few commands. So, we created a toolchain based on JSDC and modern JavaScript libraries. This article is not just a description of how to use these tools, but also an explanation of our design decisions. <b>It is described in a bottom-up manner.</b> You can skip to the bottom if you are not interested in the technical discussion and just want to get started using the tools.</p>
    <div>
      <h4><a href="https://github.com/cloudflare/json-schema-loader">json-schema-loader</a></h4>
      <a href="#">
        
      </a>
    </div>
    <p>JSON Schema files need to be preprocessed first. <b>The first thing we have to do is to resolve their references (</b><code><b>$ref</b></code><b>).</b> This can be quite a complex task since every schema can have multiple references, some of which are external (referencing even more schemas). Also, when we make a change, we want to only resolve schemas that need to be resolved. We decided to use <a href="https://webpack.github.io/">Webpack</a> for this task because a webpack loader has some great properties:</p><ul><li><p>It's a simple function that transforms input into output</p></li><li><p>It can <b>maintain and track additional file dependencies</b></p></li><li><p>It can cache the output</p></li><li><p>It can be chained</p></li><li><p>Webpack watches all changes in required modules and their dependencies</p></li></ul><p>Our loader uses the 3rd party <a href="https://github.com/BigstickCarpet/json-schema-ref-parser">JSON Schema Ref Parser</a> library. It does not adhere to the JSON Schema specification related to <code>id</code> properties and their ability to change reference scope since it is <a href="https://github.com/json-schema/json-schema/wiki/The-%22id%22-conundrum">ambiguous</a>. However, it does implement the <a href="https://tools.ietf.org/html/rfc6901">JSON Pointer</a> and <a href="https://tools.ietf.org/html/draft-pbryan-zyp-json-ref-03">JSON Reference</a> specifications. What does this mean? You can still combine relative (or absolute) paths with JSON Pointers and use references like:</p>
            <pre><code> "$ref": "./product.json#/definitions/identifier"</code></pre>
            <p>but <code>id</code>s are simply ignored and the scope is always relative to the root. That makes reasoning about our schemas easier. That being said, a unique root id is still expected for other purposes.</p>
    <div>
      <h4><a href="https://github.com/cloudflare/json-schema-example-loader">json-schema-example-loader</a></h4>
    </div>
    <p>Finally, we have resolved schemas. Unfortunately, their structure doesn't really match our final HTML documentation. It can be deeply nested, and we want to present our users with nice examples of API requests and responses. We need to do further transformations. We must remove some original properties and precompute new ones. <b>The goal is to create a data structure that will better fit our UI components.</b> Please check out the <a href="https://github.com/cloudflare/json-schema-example-loader">project page</a> for more details.</p><p>You might be asking why we use another webpack loader and why this isn't part of our web application instead. The main reason is performance. We do not want to bog down browsers by doing these transformations repeatedly: JSON Schemas can be arbitrarily nested and very complex, and the output can be precomputed once.</p>
    <div>
      <h4><a href="https://github.com/cloudflare/doca-bootstrap-theme">doca-bootstrap-theme</a></h4>
    </div>
    <p>With both of these webpack loaders, you can easily use your favorite JavaScript framework to build your own application. However, we want to make doc generation accessible even to people who don't have time to build their own app. So, we created a set of templates that match the output of <a href="https://github.com/cloudflare/json-schema-example-loader">json-schema-example-loader</a>. These templates use the popular library <a href="https://facebook.github.io/react/">React</a>. Why React?</p><ul><li><p>It can be used and rendered server-side</p></li><li><p>We can now bake additional features into components (e.g., show/hide...)</p></li><li><p>It is easily composable</p></li><li><p>We really really like it :)</p></li></ul><p><a href="https://github.com/cloudflare/doca-bootstrap-theme">doca-bootstrap-theme</a> is a generic theme based on <a href="http://getbootstrap.com/">Twitter Bootstrap v3</a>. We also have our private doca-cf-theme used by <a href="https://api.cloudflare.com">https://api.cloudflare.com</a>. We encourage you to fork it and create your own awesome themes!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1V8VwvHS59DruzMdxH6JOK/c5294bf8356413d60873d6b8b8b50752/2318887095_065ee2b724_o.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/maiacoimbra/2318887095/in/photolist-4wUUkZ-jcJ3Zi-2LP5F-6un93i-4rStsx-58wwkE-58swNn-2p9jr4-58shYz-58sjBg-6hh87g-58wDBG-8Q1NSg-5Yua6y-58wqyC-aonbLU-58wHGC-4xUiGv-4S3m1W-5YpQ9n-58tcTV-5YpQSF-9Kv4KE-58thdg-58wNey-9AuY4n-58tfoH-9B1XdE-9U5eSJ-4wV1zV-4sfeTf-58tF8R-58tizR-58x1Fq-4qhVKn-5Yu43U-4xYvCq-4bDKSq-58tbuP-nqcLsb-e1HCp9-danScf-7cx2Zc-vDYJPz-fpe9K-4qa4Cz-74nRWk-58tNyB-58sL3x-8Q1NZc">image</a> by <a href="https://www.flickr.com/photos/maiacoimbra/">Maia Coimbra</a></p>
    <div>
      <h4><a href="https://github.com/cloudflare/doca">doca</a></h4>
    </div>
    <p>So, we have loaders and nice UI components. Now, it's time to put it all together. We have something that can do just that! We call it <code>doca</code>. doca is a command-line tool written in Node.js that scaffolds the whole application for you. It is actually pretty simple. It takes a fine-tuned webpack/redux/babel-based <a href="https://github.com/cloudflare/doca/tree/master/app">application</a>, copies it into a destination of your choice, and does a few simple replacements.</p><p>Since all the hard work is done by webpack loaders and all UI components live in a separate theme package, the final app can be pretty minimal. It's not intended to be updated by the <code>doca</code> tool. <b>You should only use </b><code><b>doca</b></code><b> once.</b> Otherwise, it would just rewrite your application, which is not desirable if you made some custom modifications. For example, you might want to add <a href="https://github.com/reactjs/react-router">React Router</a> to create multi-page documentation.</p><p><code>doca</code> contains webpack configs for development and production modes. You can build a completely static version with no JavaScript. It transforms the output of <a href="https://github.com/cloudflare/json-schema-example-loader">json-schema-example-loader</a> into an immutable data structure (using <a href="https://facebook.github.io/immutable-js/">Immutable.js</a>). This brings some nice performance optimizations. This immutable structure is then passed to <a href="https://github.com/cloudflare/doca-bootstrap-theme">doca-bootstrap-theme</a> (the default option). That's it.</p><p>This is a good compromise between ease of setup and future customization. Do you have a folder with JSON Schema files and want to quickly get <code>index.html</code>? Install <code>doca</code> and use a few commands. Do you need your own look? Fork and update <a href="https://github.com/cloudflare/doca-bootstrap-theme">doca-bootstrap-theme</a>. 
Do you need to create more pages, sections, or use a different framework? Just modify the app that was scaffolded by <code>doca</code>.</p><p>One of the coolest features of webpack is <a href="https://webpack.github.io/docs/hot-module-replacement.html">hot module replacement</a>. Once you save a file, you can immediately see the result in your browser. No waiting, refreshing, scrolling or lost state. It's mostly used in <a href="https://github.com/gaearon/react-hot-loader">combination with React</a>; however, <b>we use it for JSON Schemas, too</b>. Here's a demo:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4zpr3Itr5aLCIbGx5YSSNg/bcf272b6e62a3512fb6f2228918f0abd/hot-reload.gif" />
            
            </figure><p><b>It gets even better.</b> It is easy to make a mistake in your schemas. No worries! You will be immediately prompted with a descriptive error message. Once it's fixed, you can continue with your work. No need to leave your editor. Refreshing is so yesterday!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Q5FZcPHjGe6RkMfLhh6z/efe9a625c40ec214225e1c296e709850/hot-reload-error.gif" />
            
            </figure>
    <div>
      <h3>Generating Documentation: Usage</h3>
    </div>
    <p>The only prerequisite is to have Node.js v4+ on your system. Then, you can install <code>doca</code> with:</p>
            <pre><code>npm install doca -g</code></pre>
            <p><b>There are just two simple commands.</b> The first one is <code>doca init</code>:</p>
            <pre><code>doca init [-i schema_folder] [-o project_folder] [-t theme_name]</code></pre>
            <p>It goes through the current dir (or <code>schema_folder</code>), looks for <code>**/*.json</code> files, and generates <code>/documentation</code> (or <code>/project_folder</code>). This command should be used only once when you need to bootstrap your project.</p><p>The second one is <code>doca theme</code>:</p>
            <pre><code>doca theme newTheme project</code></pre>
            <p>This gives a different theme (<code>newTheme</code>) to the <code>project</code>. It has two steps:</p><ul><li><p>It calls <code>npm install newTheme --save</code> inside of <code>project</code></p></li><li><p>It renames all <code>doca-xxx-theme</code> references to <code>doca-newTheme-theme</code></p></li></ul><p><b>This can make destructive changes in your project.</b> Always use version control!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3aUzRhuypOcLAfra36XNK/0f946219832547762a1cafd23a155efe/15551695380_624f6d63a0_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/29233640@N07/15551695380/in/photolist-pGfvdY-6vJiUQ-gcbx2-gcbAN-gcbtS-gcbqv-nMSXLP-3cFn8d-aAHLji-8qrbWP-nMXwba-qjNLUc-sHC6yq-ixZsvd-qitLjM-ytLaZ-mHXYrK-7jbGNc-a5Rqgb-7ANSGi-o3j7oU-J72L-7TxmXk-eSegHr-5m5Ezm-oTPimo-yabKR-9GBrGo-y5G1x-y5G76-y5GmQ-8DkkKh-y5Ged-cC7bYU-eaRLKr-cyo7c7-7NuihW-9GBqvW-9GBrKb-67Xu2w-9GBqJA-eaRHDe-7hbE7h-rpx2x7-bnz5ki-8N8Q1T-azZwBp-azZCZe-5oS1A8-8RqKno">image</a> by <a href="https://www.flickr.com/photos/29233640@N07/">Robert Couse-Baker</a></p>
    <div>
      <h4>Getting started</h4>
    </div>
    <p>The best way to get started is to try our example. It includes two JSON Schemas.</p>
            <pre><code>git clone git@github.com:cloudflare/doca.git
cd doca/example
doca init
cd documentation
npm install
npm start
open http://localhost:8000</code></pre>
            <p><b>That's it!</b> This results in a development environment where you can make quick changes in your schemas and see the effects immediately because of mighty hot reloading.</p><p>You can build a static <b>production ready app</b> with:</p>
            <pre><code>npm run build
open build/index.html</code></pre>
            <p>Or you can build it with <b>no JavaScript</b> using:</p>
            <pre><code>npm run build:nojs
open build/index.html</code></pre>
            <p>Do you need to <b>add more schemas</b> or change their order? Edit the file <code>/schema.js</code>. Do you want to change the generic page title or make <code>curl</code> examples nicer? Edit the file <code>/config.js</code>.</p>
    <div>
      <h3>Conclusion</h3>
    </div>
    <p>We're open sourcing a set of libraries that can help you develop and ship rich RESTful API documentation. We welcome any feedback and can't wait to see new themes created by the open source community. Please give us a <a href="https://github.com/cloudflare/doca">star on GitHub</a>. Also, if this work interests you, then you should come <a href="https://careers.jobscore.com/careers/cloudflare/jobs/senior-front-end-engineer-cI9kn86-ir4z5yiGakhP3Q">join our team</a>!</p> ]]></content:encoded>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Best Practices]]></category>
            <guid isPermaLink="false">72rB72yU768xD25jEXrwkc</guid>
            <dc:creator>Vojtech Miksu</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing the p0f BPF compiler]]></title>
            <link>https://blog.cloudflare.com/introducing-the-p0f-bpf-compiler/</link>
            <pubDate>Tue, 02 Aug 2016 14:01:15 GMT</pubDate>
            <description><![CDATA[ Two years ago we blogged about our love of BPF (BSD packet filter) bytecode. Today we are very happy to open source another component of the bpftools: our p0f BPF compiler! ]]></description>
            <content:encoded><![CDATA[ <p>Two years ago we blogged about our love of <a href="/bpf-the-forgotten-bytecode/">BPF (BSD packet filter)</a> bytecode.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7pVXfGj2thB7lm7mXwf75k/86067d4726e37e8bddb37ea1e07fe6e3/13488404_e45bf52f98_z.jpg" />
            
            </figure><p><a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a> <a href="https://www.flickr.com/photos/rocketjim54/13488404/in/photolist-6UV6AL-cNg6EC-bCTyGc-pHpUZt-8mgA4n-pZV4Qq-dJTkkr-ckZqwA-dJTkhB-q2Kyas-cLvVP1-2c8CG-a5JDy8-6NSXFW-73SFAD-9JGikG-6NNM2t-6mGTaN-eHCuuX-6NSXLC-6mH8fQ">image</a> by <a href="https://www.flickr.com/photos/rocketjim54/">jim simonson</a></p><p>Then we published a set of utilities we are using to generate the BPF rules for our production iptables: <a href="/introducing-the-bpf-tools/">the bpftools</a>.</p><p>Today we are very happy to open source another component of the bpftools: our <b>p0f BPF compiler</b>!</p>
    <div>
      <h3>Meet the p0f</h3>
    </div>
    <p><a href="http://lcamtuf.coredump.cx/p0f3/">p0f</a> is a tool written by superhuman <a href="https://en.wikipedia.org/wiki/Micha%C5%82_Zalewski">Michal Zalewski</a>. The main purpose of p0f is to passively analyze and categorize arbitrary network traffic. You can feed p0f any packet and in return it will derive knowledge about the operating system that sent the packet.</p><p>One of the features that caught our attention was the concise yet explanatory signature format used to describe TCP SYN packets.</p><p>The p0f SYN signature is a simple string of colon-separated values. This string cleanly describes a SYN packet in a human-readable way. The format is pretty smart, skipping the varying TCP fields and keeping focus only on the essence of the SYN packet, extracting the interesting bits from it.</p><p>We use this on a daily basis to categorize the packets that we, at CloudFlare, see when we are the target of a SYN flood. To defeat SYN attacks we want to discriminate the packets that are part of an attack from legitimate traffic. One of the ways we do this uses p0f.</p><p>We want to rate limit attack packets, and in effect prioritize processing of <i>other</i>, hopefully legitimate, ones. The p0f SYN signatures give us a language to describe and distinguish different types of SYN packets.</p><p>For example, here is a typical p0f SYN signature of a Linux SYN packet:</p>
            <pre><code>4:64:0:*:mss*10,6:mss,sok,ts,nop,ws:df,id+:0</code></pre>
            <p>while this is a Windows 7 one:</p>
            <pre><code>4:128:0:*:8192,8:mss,nop,ws,nop,nop,sok:df,id+:0</code></pre>
            <p>Without getting into details yet, you can clearly see that there are differences between these operating systems. Over time we noticed that attack packets often look different. Here are two examples of attack SYN packets:</p>
            <pre><code>4:255:0:0:*,0::ack+,uptr+:0
4:64:0:*:65535,*:mss,nop,ws,nop,nop,sok:df,id+:0</code></pre>
            <p>You can have a look at more signatures in p0f's <a href="https://github.com/p0f/p0f/blob/master/docs/README">README</a> and <a href="https://github.com/p0f/p0f/blob/master/p0f.fp">signatures database</a>.</p><p>It's not <i>always</i> possible to perfectly distinguish an attack from valid packets, but very often it is. This realization led us to develop an attack mitigation tool based on p0f SYN signatures. With this we can ask <code>iptables</code> to rate limit only the selected attack signatures.</p><p>But before we discuss the mitigations, let's explain the signature format.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3UNvxctXaoSVPeP4dICkXM/83267ae5e26c6ecca088facfb127a80d/640px-12-8_equals_4-4_drum_pattern.png" />
            
            </figure><p><a href="http://creativecommons.org/licenses/by-sa/3.0/">CC BY-SA 3.0</a> <a href="https://commons.wikimedia.org/w/index.php?curid=17871154">image</a> by <a href="//en.wikipedia.org/wiki/User:Hyacinth">Hyacinth</a> at the <a href="//en.wikipedia.org/wiki/">English language Wikipedia</a></p>
    <div>
      <h2>Signature</h2>
    </div>
    <p>As mentioned, the p0f SYN signature is a colon-separated string with the following parts:</p><ul><li><p><b>IP version</b>: the first field carries the IP version. Allowed values are <code>4</code> and <code>6</code>.</p></li><li><p><b>Initial TTL</b>: assuming that realistically a packet will not jump through more than 35 hops, we can specify an initial TTL <i>ittl</i> (usual values are <code>255</code>, <code>128</code>, <code>64</code> and <code>32</code>) and check if the packet's TTL is in the range (<i>ittl</i> - 35, <i>ittl</i>].</p></li><li><p><b>IP options length</b>: length of IP options. Although it's not that common to see options in the IP header (and so <code>0</code> is the typical value you would see in a signature), the standard defines a variable length field before the IP payload where options can be specified. A <code>*</code> value is allowed too, which means "not specified".</p></li><li><p><b>MSS</b>: maximum segment size specified in the TCP options. Can be a constant or <code>*</code>.</p></li><li><p><b>Window Size</b>: window size specified in the TCP header. It can be expressed as:</p><ul><li><p>a constant <code>c</code>, like 8192</p></li><li><p>a multiple of the MSS, in the <code>c*mss</code> format</p></li><li><p>a multiple of a constant, in the <code>%c</code> format</p></li><li><p>any value, as <code>*</code></p></li></ul></li><li><p><b>Window Scale</b>: window scale specified during the three-way handshake. Can be a constant or <code>*</code>.</p></li><li><p><b>TCP options layout</b>: list of TCP options in the order they are seen in a TCP packet.</p></li><li><p><b>Quirks</b>: comma-separated list of unusual (e.g. ACK number set in a non-ACK packet) or incorrect (e.g. malformed TCP options) characteristics of a packet.</p></li><li><p><b>Payload class</b>: TCP payload size. Can be <code>0</code> (no data), <code>+</code> (1 or more bytes of data) or <code>*</code>.</p></li></ul>
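<p>Putting the format together: a signature is eight colon-separated fields, with window size and window scale sharing the fifth field (joined by a comma). A hypothetical Go sketch of the split, for illustration only:</p>

```go
package main

import (
	"fmt"
	"strings"
)

// Signature holds the eight colon-separated fields of a p0f SYN
// signature. Field names here are our own, for illustration.
type Signature struct {
	IPVersion    string
	InitialTTL   string
	IPOptLen     string
	MSS          string
	WinSizeScale string // window size and scale share this field, e.g. "mss*10,6"
	OptionLayout string
	Quirks       string
	PayloadClass string
}

func parseSignature(s string) (Signature, error) {
	parts := strings.Split(s, ":")
	if len(parts) != 8 {
		return Signature{}, fmt.Errorf("expected 8 fields, got %d", len(parts))
	}
	return Signature{
		IPVersion:    parts[0],
		InitialTTL:   parts[1],
		IPOptLen:     parts[2],
		MSS:          parts[3],
		WinSizeScale: parts[4],
		OptionLayout: parts[5],
		Quirks:       parts[6],
		PayloadClass: parts[7],
	}, nil
}

func main() {
	sig, err := parseSignature("4:64:0:*:mss*10,6:mss,sok,ts,nop,ws:df,id+:0")
	if err != nil {
		panic(err)
	}
	// Prints: 4 64 mss,sok,ts,nop,ws df,id+
	fmt.Println(sig.IPVersion, sig.InitialTTL, sig.OptionLayout, sig.Quirks)
}
```

<p>The real compiler of course goes further, interpreting each field rather than just splitting the string.</p>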
    <div>
      <h4>TCP Options format</h4>
    </div>
    <p>The following common TCP options are recognised:</p><ul><li><p><b>nop</b>: no-operation</p></li><li><p><b>mss</b>: maximum segment size</p></li><li><p><b>ws</b>: window scaling</p></li><li><p><b>sok</b>: selective ACK permitted</p></li><li><p><b>sack</b>: selective ACK</p></li><li><p><b>ts</b>: timestamp</p></li><li><p><b>eol+x</b>: end of options followed by <code>x</code> bytes of padding</p></li></ul>
    <div>
      <h4>Quirks</h4>
    </div>
    <p>p0f describes a number of quirks:</p><ul><li><p><b>df</b>: don't fragment bit is set in the IP header</p></li><li><p><b>id+</b>: df bit is set and IP identification field is non zero</p></li><li><p><b>id-</b>: df bit is not set and IP identification is zero</p></li><li><p><b>ecn</b>: explicit congestion flag is set</p></li><li><p><b>0+</b>: reserved ("must be zero") field in IP header is not actually zero</p></li><li><p><b>flow</b>: flow label in IPv6 header is non-zero</p></li><li><p><b>seq-</b>: sequence number is zero</p></li><li><p><b>ack+</b>: ACK field is non-zero but ACK flag is not set</p></li><li><p><b>ack-</b>: ACK field is zero but ACK flag is set</p></li><li><p><b>uptr+</b>: URG field is non-zero but URG flag not set</p></li><li><p><b>urgf+</b>: URG flag is set</p></li><li><p><b>pushf+</b>: PUSH flag is set</p></li><li><p><b>ts1-</b>: timestamp 1 is zero</p></li><li><p><b>ts2+</b>: timestamp 2 is non-zero in a SYN packet</p></li><li><p><b>opt+</b>: non-zero data in options segment</p></li><li><p><b>exws</b>: excessive window scaling factor (window scale greater than 14)</p></li><li><p><b>linux</b>: match a packet sent from the Linux network stack (<code>IP.id</code> field equal to <code>TCP.ts1</code> xor <code>TCP.seq_num</code>). Note that this quirk is not part of the original p0f signature format; we decided to add it since we found it useful.</p></li><li><p><b>bad</b>: malformed TCP options</p></li></ul>
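<p>As a rough illustration of that last Linux quirk, here's a hypothetical check (assuming the 32-bit XOR is truncated to the 16-bit IP identification field; see the bpftools source for the exact semantics):</p>

```go
package main

import "fmt"

// linuxQuirk reports whether a SYN packet matches the "linux" quirk
// described above: the IP identification field equals TCP.ts1 XOR
// TCP.seq_num. Truncating the XOR to 16 bits is an assumption here.
func linuxQuirk(ipID uint16, ts1, seqNum uint32) bool {
	return ipID == uint16(ts1^seqNum)
}

func main() {
	// 0x1234 ^ 0x5678 = 0x444c, so this packet matches the quirk.
	fmt.Println(linuxQuirk(0x444c, 0x1234, 0x5678))
}
```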
    <div>
      <h2>Mitigating attacks</h2>
    </div>
    <p>Given a p0f SYN signature, we want to pass it to <code>iptables</code> for mitigation. It's not obvious how to do so, but fortunately we have experience with BPF bytecode since we already use it to block DNS DDoS attacks.</p><p>We decided to extend our BPF infrastructure to support p0f as well, by building a tool that compiles a p0f SYN signature into a BPF bytecode blob; it has been incorporated into the bpftools project.</p><p>This allows us to use a simple, human-readable syntax for the mitigations - the p0f signature - and compile it to a very efficient BPF form that iptables can use.</p><p>With a p0f signature running as BPF in iptables, we're able to distinguish attack packets at very high speed and react accordingly. We can either hard <code>-j DROP</code> them or rate limit them if we wish.</p>
    <div>
      <h2>How to compile p0f to BPF</h2>
    </div>
    <p>First you need to clone the <code>cloudflare/bpftools</code> GitHub repository:</p>
            <pre><code>$ git clone https://github.com/cloudflare/bpftools.git</code></pre>
            <p>Then compile it:</p>
            <pre><code>$ cd bpftools
$ make</code></pre>
            <p>With this you can run <code>bpfgen p0f</code> to generate a BPF filter that matches a p0f signature.</p><p>Here's an example where we take the p0f signature of a Linux TCP SYN packet (the one we introduced before), and by using <code>bpftools</code> we generate the BPF bytecode that will match this category of packets:</p>
            <pre><code>$ ./bpfgen p0f -- 4:64:0:*:mss*10,6:mss,sok,ts,nop,ws:df,id+:0
56,0 0 0 0,48 0 0 8,37 52 0 64,37 0 51 29,48 0 0 0,
84 0 0 15,21 0 48 5,48 0 0 9,21 0 46 6,40 0 0 6,
69 44 0 8191,177 0 0 0,72 0 0 14,2 0 0 8,72 0 0 22,
36 0 0 10,7 0 0 0,96 0 0 8,29 0 36 0,177 0 0 0,
80 0 0 39,21 0 33 6,80 0 0 12,116 0 0 4,21 0 30 10,
80 0 0 20,21 0 28 2,80 0 0 24,21 0 26 4,80 0 0 26,
21 0 24 8,80 0 0 36,21 0 22 1,80 0 0 37,21 0 20 3,
48 0 0 6,69 0 18 64,69 17 0 128,40 0 0 2,2 0 0 1,
48 0 0 0,84 0 0 15,36 0 0 4,7 0 0 0,96 0 0 1,
28 0 0 0,2 0 0 5,177 0 0 0,80 0 0 12,116 0 0 4,
36 0 0 4,7 0 0 0,96 0 0 5,29 0 1 0,6 0 0 65536,
6 0 0 0,</code></pre>
            <p>If this looks magical, use the <code>-s</code> flag to see an explanation of what's going on:</p>
            <pre><code>$ ./bpfgen -s p0f -- 4:64:0:*:mss*10,6:mss,sok,ts,nop,ws:df,id+:0
; ip: ip version
; (ip[8] &lt;= 64): ttl &lt;= 64
; (ip[8] &gt; 29): ttl &gt; 29
; ((ip[0] &amp; 0xf) == 5): IP options len == 0
; (tcp[14:2] == (tcp[22:2] * 10)): win size == mss * 10
; (tcp[39:1] == 6): win scale == 6
; ((tcp[12] &gt;&gt; 4) == 10): TCP data offset
; (tcp[20] == 2): olayout mss
; (tcp[24] == 4): olayout sok
; (tcp[26] == 8): olayout ts
; (tcp[36] == 1): olayout nop
; (tcp[37] == 3): olayout ws
; ((ip[6] &amp; 0x40) != 0): df set
; ((ip[6] &amp; 0x80) == 0): mbz zero
; ((ip[2:2] - ((ip[0] &amp; 0xf) * 4) - ((tcp[12] &gt;&gt; 4) * 4)) == 0): payload len == 0
;
; ipver=4
; ip and (ip[8] &lt;= 64) and (ip[8] &gt; 29) and ((ip[0] &amp; 0xf) == 5) and (tcp[14:2] == (tcp[22:2] * 10)) and (tcp[39:1] == 6) and ((tcp[12] &gt;&gt; 4) == 10) and (tcp[20] == 2) and (tcp[24] == 4) and (tcp[26] == 8) and (tcp[36] == 1) and (tcp[37] == 3) and ((ip[6] &amp; 0x40) != 0) and ((ip[6] &amp; 0x80) == 0) and ((ip[2:2] - ((ip[0] &amp; 0xf) * 4) - ((tcp[12] &gt;&gt; 4) * 4)) == 0)

l000:
    ld       #0x0
l001:
    ldb      [8]
l002:
    jgt      #0x40, l055, l003
l003:
    jgt      #0x1d, l004, l055
l004:
    ldb      [0]
l005:
    and      #0xf
l006:
    jeq      #0x5, l007, l055
l007:
    ldb      [9]
l008:
    jeq      #0x6, l009, l055
l009:
    ldh      [6]
l010:
    jset     #0x1fff, l055, l011
l011:
    ldxb     4*([0]&amp;0xf)
l012:
    ldh      [x + 14]
l013:
    st       M[8]
l014:
    ldh      [x + 22]
l015:
    mul      #10
l016:
    tax
l017:
    ld       M[8]
l018:
    jeq      x, l019, l055
l019:
    ldxb     4*([0]&amp;0xf)
l020:
    ldb      [x + 39]
l021:
    jeq      #0x6, l022, l055
l022:
    ldb      [x + 12]
l023:
    rsh      #4
l024:
    jeq      #0xa, l025, l055
l025:
    ldb      [x + 20]
l026:
    jeq      #0x2, l027, l055
l027:
    ldb      [x + 24]
l028:
    jeq      #0x4, l029, l055
l029:
    ldb      [x + 26]
l030:
    jeq      #0x8, l031, l055
l031:
    ldb      [x + 36]
l032:
    jeq      #0x1, l033, l055
l033:
    ldb      [x + 37]
l034:
    jeq      #0x3, l035, l055
l035:
    ldb      [6]
l036:
    jset     #0x40, l037, l055
l037:
    jset     #0x80, l055, l038
l038:
    ldh      [2]
l039:
    st       M[1]
l040:
    ldb      [0]
l041:
    and      #0xf
l042:
    mul      #4
l043:
    tax
l044:
    ld       M[1]
l045:
    sub      x
l046:
    st       M[5]
l047:
    ldxb     4*([0]&amp;0xf)
l048:
    ldb      [x + 12]
l049:
    rsh      #4
l050:
    mul      #4
l051:
    tax
l052:
    ld       M[5]
l053:
    jeq      x, l054, l055
l054:
    ret      #65536
l055:
    ret      #0</code></pre>
            
    <div>
      <h2>Example run</h2>
    </div>
    <p>For example, suppose we want to block SYN packets generated by the <code>hping3</code> tool.</p><p>First, we need to recognize its p0f SYN signature. Here it is; we know that one off the top of our heads:</p>
            <pre><code>4:64:0:0:*,0::ack+:0</code></pre>
            <p>(notice: unless you use the <code>-L 0</code> option, <code>hping3</code> will send SYN packets with the ACK number set, interesting, isn't it?)</p><p>Now, we can use the bpftools to get BPF bytecode that will match the naughty packets:</p>
            <pre><code>$ ./bpfgen p0f -- 4:64:0:0:*,0::ack+:0
39,0 0 0 0,48 0 0 8,37 35 0 64,37 0 34 29,48 0 0 0,
84 0 0 15,21 0 31 5,48 0 0 9,21 0 29 6,40 0 0 6,
69 27 0 8191,177 0 0 0,80 0 0 12,116 0 0 4,
21 0 23 5,48 0 0 6,69 21 0 128,80 0 0 13,
69 19 0 16,64 0 0 8,21 17 0 0,40 0 0 2,2 0 0 3,
48 0 0 0,84 0 0 15,36 0 0 4,7 0 0 0,96 0 0 3,
28 0 0 0,2 0 0 7,177 0 0 0,80 0 0 12,116 0 0 4,
36 0 0 4,7 0 0 0,96 0 0 7,29 0 1 0,6 0 0 65536,
6 0 0 0,</code></pre>
            <p>This bytecode can then be passed to iptables:</p>
            <pre><code>$ sudo iptables -A INPUT -p tcp --dport 80 -m bpf --bytecode "39,0 0 0 0,48 0 0 8,37 35 0 64,37 0 34 29,48 0 0 0,84 0 0 15,21 0 31 5,48 0 0 9,21 0 29 6,40 0 0 6,69 27 0 8191,177 0 0 0,80 0 0 12,116 0 0 4,21 0 23 5,48 0 0 6,69 21 0 128,80 0 0 13,69 19 0 16,64 0 0 8,21 17 0 0,40 0 0 2,2 0 0 3,48 0 0 0,84 0 0 15,36 0 0 4,7 0 0 0,96 0 0 3,28 0 0 0,2 0 0 7,177 0 0 0,80 0 0 12,116 0 0 4,36 0 0 4,7 0 0 0,96 0 0 7,29 0 1 0,6 0 0 65536,6 0 0 0," -j DROP</code></pre>
            <p>And here's how it would look in iptables:</p>
            <pre><code>$ sudo iptables -L INPUT -v
Chain INPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    6   240            tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:80 match bpf 0 0 0 0,48 0 0 8,37 35 0 64,37 0 34 29,48 0 0 0,84 0 0 15,21 0 31 5,48 0 0 9,21 0 29 6,40 0 0 6,69 27 0 8191,177 0 0 0,80 0 0 12,116 0 0 4,21 0 23 5,48 0 0 6,69 21 0 128,80 0 0 13,69 19 0 16,64 0 0 8,21 17 0 0,40 0 0 2,2 0 0 3,48 0 0 0,84 0 0 15,36 0 0 4,7 0 0 0,96 0 0 3,28 0 0 0,2 0 0 7,177 0 0 0,80 0 0 12,116 0 0 4,36 0 0 4,7 0 0 0,96 0 0 7,29 0 1 0,6 0 0 65536,6 0 0 0</code></pre>
            
    <div>
      <h4>Closing words</h4>
    </div>
    <p>While defending against DDoS attacks is sometimes fun, most often it's a mundane, repetitive job. We are constantly working on improving our automatic DDoS mitigation system, but we do not believe there is a strong reason to keep it all secret. We want to help others fight attacks. Maybe if we all work together, one day we can solve the DDoS problem for everyone.</p><p>Releasing our code as <a href="https://cloudflare.github.io/">open source</a> is an important part of how we work at CloudFlare. This blog post and the p0f BPF compiler are part of our effort to open source our DDoS mitigations. We hope others affected by SYN floods will find it useful.</p><p><i>Do you enjoy playing with low-level networking bits? Are you interested in dealing with some of the largest DDoS attacks ever seen? </i><i>If so, you should definitely have a look at the </i><a href="https://www.cloudflare.com/join-our-team/"><i>open positions</i></a><i> in our London, San Francisco, Singapore, Champaign (IL) and Austin (TX) offices!</i></p> ]]></content:encoded>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Best Practices]]></category>
            <guid isPermaLink="false">a4eKkNiCIb7ugKBTDIZQv</guid>
            <dc:creator>Gilberto Bertin</dc:creator>
        </item>
        <item>
            <title><![CDATA[Go coverage with external tests]]></title>
            <link>https://blog.cloudflare.com/go-coverage-with-external-tests/</link>
            <pubDate>Tue, 19 Jan 2016 18:19:59 GMT</pubDate>
            <description><![CDATA[ The Go test coverage implementation is quite ingenious: when asked to, the Go compiler will preprocess the source so that when each code portion is executed a bit is set in a coverage bitmap. ]]></description>
            <content:encoded><![CDATA[ <p>The Go test coverage implementation is <a href="https://blog.golang.org/cover">quite ingenious</a>: when asked to, the Go compiler will preprocess the source so that when each code portion is executed a bit is set in a coverage bitmap. This is integrated in the <code>go test</code> tool: <code>go test -cover</code> enables it and <code>-coverprofile=</code> allows you to write a profile to then inspect with <code>go tool cover</code>.</p><p>This makes it very easy to get unit test coverage, but <b>there's no simple way to get coverage data for tests that you run against the main version of your program, like end-to-end tests</b>.</p><p>The proper fix would involve adding <code>-cover</code> preprocessing support to <code>go build</code>, and exposing the coverage profile maybe as a <code>runtime/pprof.Profile</code>, but as of Go 1.6 there’s no such support. Here instead is a hack we've been using for a while in the test suite of <a href="/tag/rrdns/">RRDNS</a>, our custom Go DNS server.</p><p>We create a <b>dummy test</b> that executes <code>main()</code>, we put it behind a build tag, compile a binary with <code>go test -c -cover</code> and then run only that test instead of running the regular binary.</p><p>Here's what the <code>rrdns_test.go</code> file looks like:</p>
            <pre><code>// +build testrunmain

package main

import "testing"

func TestRunMain(t *testing.T) {
	main()
}</code></pre>
            <p>We compile the binary like this</p>
            <pre><code>$ go test -coverpkg="rrdns/..." -c -tags testrunmain rrdns</code></pre>
            <p>And then when we want to collect coverage information, we execute this instead of <code>./rrdns</code> (and run our test battery as usual):</p>
            <pre><code>$ ./rrdns.test -test.run "^TestRunMain$" -test.coverprofile=system.out</code></pre>
            <p>You must return from <code>main()</code> cleanly for the profile to be written to disk; in RRDNS we do that by catching SIGINT. You can still use command line arguments and standard input normally, just note that you will get two lines of extra output from the test framework.</p><p>Finally, since you probably also run unit tests, you might want to merge the coverage profiles with <a href="https://github.com/wadey/gocovmerge">gocovmerge</a> (from <a href="https://github.com/golang/go/issues/6909#issuecomment-124185553">issue #6909</a>):</p>
            <pre><code>$ go get github.com/wadey/gocovmerge
$ gocovmerge unit.out system.out &gt; all.out
$ go tool cover -html all.out</code></pre>
            <p>If finding creative ways to test big-scale network services sounds fun, know that <a href="https://www.cloudflare.com/join-our-team/">we are hiring in London, San Francisco and Singapore</a>.</p> ]]></content:encoded>
            <category><![CDATA[RRDNS]]></category>
            <category><![CDATA[DNS]]></category>
            <category><![CDATA[Reliability]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Go]]></category>
            <category><![CDATA[Best Practices]]></category>
            <guid isPermaLink="false">285AT2igoBQiRZFlDLnGsZ</guid>
            <dc:creator>Filippo Valsorda</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Go Gotcha: When Closures and Goroutines Collide]]></title>
            <link>https://blog.cloudflare.com/a-go-gotcha-when-closures-and-goroutines-collide/</link>
            <pubDate>Wed, 25 Mar 2015 16:23:01 GMT</pubDate>
            <description><![CDATA[ Here's a small Go gotcha that's easy to fall into when using goroutines and closures. Here's a simple program that prints out the numbers 0 to 9. ]]></description>
            <content:encoded><![CDATA[ <p>Here's a small Go gotcha that's easy to fall into when using goroutines and closures. Here's a simple program that prints out the numbers 0 to 9:</p><p>(You can play with this in the Go Playground <a href="https://play.golang.org/p/dLfrQ7JCf5">here</a>)</p>
            <pre><code>package main

import "fmt"

func main() {
	for i := 0; i &lt; 10; i++ {
		fmt.Printf("%d ", i)
	}
}</code></pre>
            <p>Its output is easy to predict:</p>
            <pre><code>0 1 2 3 4 5 6 7 8 9</code></pre>
            <p>If you decided that it would be nice to run those <code>fmt.Printf</code>s concurrently using goroutines, you might be surprised by the result. Here's a version of the code that runs each <code>fmt.Printf</code> in its own goroutine and uses a <code>sync.WaitGroup</code> to wait for the goroutines to terminate.</p>
            <pre><code>package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())

	var wg sync.WaitGroup
	for i := 0; i &lt; 10; i++ {
		wg.Add(1)
		go func() {
			fmt.Printf("%d ", i)
			wg.Done()
		}()
	}

	wg.Wait()
}</code></pre>
            <p>(This code is in the Go Playground <a href="https://play.golang.org/p/VmW3H-xqsz">here</a>). If you're thinking concurrently then you'll likely predict that the output will be the numbers 0 to 9 in some random order depending on precisely when the 10 goroutines run.</p><p>But the output is actually:</p>
            <pre><code>10 10 10 10 10 10 10 10 10 10</code></pre>
            <p>Why?</p><p>Because each of those goroutines is sharing the single variable <code>i</code> across the ten closures generated by the <code>func()</code> used for each goroutine.</p><p>The output from the goroutines will depend on the value of <code>i</code> when they start running. In the example above, they didn't actually start running until the loop had terminated and <code>i</code> had the value 10.</p><p>This programmer error can have other weird effects depending on the variable that's being shared across the goroutine closures.</p><p>The simplest solution is to create a new variable: make <code>i</code> a parameter of the <code>func()</code> and pass it into the function call. Like this:</p>
            <pre><code>package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())

	var wg sync.WaitGroup
	for i := 0; i &lt; 10; i++ {
		wg.Add(1)
		go func(i int) {
			fmt.Printf("%d ", i)
			wg.Done()
		}(i)
	}

	wg.Wait()
}</code></pre>
            <p>(The code for the parameterised version is <a href="https://play.golang.org/p/IeFgq5CNOk">here</a>.) That works correctly.</p><p>This is such a common gotcha that it's also covered in the <a href="https://golang.org/doc/faq#closures_and_goroutines">Go FAQ</a>.</p> ]]></content:encoded>
            <category><![CDATA[Go]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Best Practices]]></category>
            <guid isPermaLink="false">foArcj350vrx8nPPrpmft</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[Keeping passwords safe by staying up to date]]></title>
            <link>https://blog.cloudflare.com/keeping-passwords-safe-by-staying-up-to-date/</link>
            <pubDate>Sun, 17 Jun 2012 22:08:00 GMT</pubDate>
            <description><![CDATA[ Over the last few weeks a number of companies have seen their password databases leaked onto the web and found that despite having made some effort to protect them many of the passwords were easily uncovered.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Image credit: <a href="http://www.flickr.com/photos/jantik/">jantik</a></p><p>Over the last few weeks a number of companies have seen their password databases leaked onto the web and found that despite having made some effort to protect them many of the passwords were easily uncovered. Unfortunately, the disclosure of password databases is an ugly reality of the Internet; entire forums are dedicated to hackers who collaborate to uncover passwords from files and specialized password cracking software is easy to obtain.</p><p>To understand password storage it's best to go back to basics and some history.</p>
    <div>
      <h3>Plain</h3>
      <a href="#plain">
        
      </a>
    </div>
    <p>The simplest way to store a password is just to store it in a database. When a customer tries to log in and types in the password 'supersecret', that string is compared with the password in the database and the customer is or is not allowed in.</p><p>Of course, storing passwords in the clear (or in plain text) is very dangerous. If the database is compromised then the passwords can be read and every account can be broken into. Despite this danger there are many companies that store passwords in plain text. Some attempt to encrypt the password and then decrypt it when you log in. Although that's slightly better than a plain text password in the database, it only adds a small hurdle for a hacker: they just need the database and the encryption key, and since the key is almost certainly on the same machine as the database, getting both is trivial.</p><p>Despite the poor security offered by encrypted or plain text passwords, many companies still use them. One surefire way to find out whether a site you are using does this is to ask for a password reset: if the company is able to email you your old password then it was stored insecurely.</p>
    <div>
      <h3>Hashed</h3>
      <a href="#hashed">
        
      </a>
    </div>
    <p>If you're following along and are new to password security you may be asking yourself: how can you test someone's password when they want to log in if you don't store it in some way? It does seem like an unsolvable conundrum until you discover the <a href="http://en.wikipedia.org/wiki/Cryptographic_hash_function">cryptographic hash function</a> (which I'll just shorten to hash function).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3I9diRi82AznyCXUGRjj7S/200302ca32095c9f9bee250e66da1975/Screen_Shot_2012-06-15_at_3.50.45_PM.png.scaled500.png" />
            
            </figure><p>A hash function takes some string (such as a password) and turns it into a long number. In doing so it ensures two things: it's not possible to do the reverse (you can't take the number and run the algorithm backwards to get the string) and the number it generates is unique (i.e. there are no two strings that have the same number).</p><p>(Aside: I've simplified things a little in the previous paragraph. "not possible" should really be "infeasible" (i.e. you'd need to have more computers than there are on the planet to find the string) and "unique" should be "vanishingly improbable that two different strings will have the same number").</p><p>Hash functions work by taking the string to be hashed and scrambling the bits over and again to produce a number. One popular hash function is <a href="http://en.wikipedia.org/wiki/SHA-1">SHA-1</a>. The SHA-1 hash of the password 'supersecret' is a761ce3a45d97e41840a788495e85a70d1bb3815 (the numbers are so long that they are typically written like this in hexadecimal instead of decimal. In decimal that number is 955,582,595,971,963,915,918,670,633,711,507,401,334,868,097,045). The SHA-1 hash of 'Supersecret' (note the capital S) is 1b417472fc8e2a0a4d44ed43f874309ca4069099 (as you can see it's totally different).</p><p>Hash functions are used for many purposes such as checking that the contents of a file haven't changed. When you download a file from the Internet its hash might also be sent so that your computer can check that no bits in the file have been accidentally flipped in transmission.</p><p>Hash functions are also often used in password systems because instead of storing the password, you can simply store the hash. Since the hash can't be easily reversed the stored hash is a secure way of keeping the password. When a visitor comes to the site the hash of the password they entered is calculated and compared with the hash in the database. Since the hashes are unique they'll only be able to log in with the right password.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/AF44SxKmalUxy89njvAHn/acbbb6d689df180b42c2009874897c98/6309013551_ddb45d5108_n.jpeg.scaled500.jpg" />
            
            </figure><p>Image credit: <a href="http://www.flickr.com/photos/togawanderings/">ToGa Wanderings</a></p><p>Unfortunately, simply using a hash function like this is dangerous. Over the last few weeks a number of prominent Internet companies have found that their password databases have been cracked even though they 'hashed' their passwords. To see why, try Googling <a href="https://www.google.co.uk/search?sugexp=chrome,mod=3&amp;sourceid=chrome&amp;ie=UTF-8&amp;q=a761ce3a45d97e41840a788495e85a70d1bb3815">a761ce3a45d97e41840a788495e85a70d1bb3815</a>. You might be surprised to find that the first result tells you that that's the SHA-1 hash of 'supersecret'.</p><p>The problem with simple hash functions is that hackers simply get a dictionary and compute all the hashes of all the possible passwords made from the dictionary. These massive databases of precomputed hashes are called <a href="http://en.wikipedia.org/wiki/Rainbow_table">rainbow tables</a>. If a password database leaks then the hackers just look up the hashes in the rainbow table. The hashes that aren't found in the rainbow table correspond to those users who created long, complex passwords that weren't precomputed in this way. That's one reason why picking a long, complex password matters: hackers won't have already computed its hash.</p><p>Even though the hash function itself couldn't be reversed, it was possible to create a table of precomputed password hashes (especially for poorly chosen passwords).</p>
    <div>
      <h3>Salted</h3>
      <a href="#salted">
        
      </a>
    </div>
    <p>The way around rainbow tables is with something called salt. Let's suppose you've picked the password 'supersecret' and company X is going to use SHA-1 to hash the password. Instead of simply hashing the password, company X picks a random salt (a random string of characters) that's unique to you (such as '$f2%38h##f23'). Instead of computing SHA-1(supersecret) they compute SHA-1(supersecret$f2%38h##f23) and get 33438b91ce09e6959232f698b7939e6ee1d0712a. Try Googling that and you won't get <a href="https://www.google.co.uk/#hl=en&amp;safe=active&amp;output=search&amp;sclient=psy-ab&amp;q=33438b91ce09e6959232f698b7939e6ee1d0712a&amp;oq=33438b91ce09e6959232f698b7939e6ee1d0712a&amp;aq=f&amp;aqi=&amp;aql=&amp;gs_l=hp.3...981.981.0.1798.1.1.0.0.0.0.72.72.1.1.0...0.0.5sGOYG16PtI&amp;pbx=1&amp;bav=on.2,or.r_gc.r_pw.r_cp.r_qf.,cf.osb&amp;fp=e7a0b0c2ba4b400&amp;biw=1372&amp;bih=706">any results</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6y8omyVJs6bU4Jb4hqfBAr/81a0900fc739886e64bbe3ed914abebe/4377164898_b72c763811_m.jpeg.scaled500.jpg" />
            
            </figure><p>Image credit: <a href="http://www.flickr.com/photos/stlbites/">stlbites</a></p><p>Since each user has some random salt applied to the hash, rainbow tables are useless. It's not possible to precompute the hashes of all the possible passwords with all the possible salt values.</p><p>Until recently a 'salted hash' like this was how CloudFlare stored user passwords.</p><p>Unfortunately, password cracking techniques benefit enormously from two things: <a href="http://en.wikipedia.org/wiki/Moore's_law">Moore's Law</a> and the speed of hash functions. Hash functions weren't originally designed for protecting passwords; they were designed to check the integrity of data by detecting changes (notice how just changing from s to S in supersecret dramatically changed the SHA-1 hash above) and for that reason they were designed to be fast, very fast.</p><p>As computers have increased in speed with Moore's Law the speed of hash functions has made it possible to do away with rainbow tables and start attacking passwords directly even when salted. When a password database leaks, password cracking software is able to compute millions of passwords per second, applying the unique salt to each password and checking the resulting hash value. The software literally tries out combinations of words and letters and computes the hash for each one.</p><p>That means that only long, complex passwords are safe with a salted hash.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1WLaFsgoZYDOL3oj2ugWE0/e31e53d43485149a137cd23275195948/3777191143_0bc8d8e9d1_n.jpeg.scaled500.jpg" />
            
            </figure><p>Image credit: <a href="http://www.flickr.com/photos/4nitsirk">4nitsirk</a></p><p>The solution is to use a hash function that's deliberately slow: if computing the hash itself takes time, cracking software is slowed down by the same factor. And if the slowness is tunable, the hash function can be made slower and slower over time so that password cracking doesn't get easier as computers get faster.</p>
    <div>
      <h3>Future Proof</h3>
      <a href="#future-proof">
        
      </a>
    </div>
    <p>Happily, hash functions with just that property have been invented specifically to help keep passwords safe. We recently upgraded our entire password database to use <a href="http://en.wikipedia.org/wiki/Bcrypt">bcrypt</a>. bcrypt is just like a normal hash function but it has an additional parameter: as well as being fed the password and some random salt, it's fed a cost. The cost tells the hash function how hard to work in computing the hash (and thus determines how long it will take).</p><p>Over time the cost can be increased (it's just a number) to keep pace with faster and faster computers and keep passwords safe by making the hash function slower and slower.</p><p>Just like all aspects of security, password storage needs to be reviewed from time to time. As we've seen recently, many companies don't take the time to upgrade their password security, leading to serious problems.</p><p>And, of course, users can help out too: password cracking relies partly on the algorithms used to store the passwords and partly on the complexity of the password. Make sure to choose a long, complex password and don't use it on any other site.</p> ]]></content:encoded>
            <category><![CDATA[Salt]]></category>
            <category><![CDATA[Best Practices]]></category>
            <guid isPermaLink="false">5HtUzTjeN5m6I8LtyhOCz8</guid>
            <dc:creator>John Graham-Cumming</dc:creator>
        </item>
        <item>
            <title><![CDATA[CloudFlare Tips: Recommended steps after activating through a partner]]></title>
            <link>https://blog.cloudflare.com/cloudflare-tips-recommended-steps-after-activ/</link>
            <pubDate>Mon, 16 Apr 2012 19:04:00 GMT</pubDate>
            <description><![CDATA[ CloudFlare has partnered with a number of CloudFlare Certified Partners to make it simple for website owners that want a faster and safer website.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>CloudFlare has partnered with a number of <a href="https://www.cloudflare.com/hosting-partners">CloudFlare Certified Partners</a> to make it simple for website owners that want a <a href="https://www.cloudflare.com/features-cdn">faster</a> and <a href="https://www.cloudflare.com/features-security">safer</a> website. Since signing up for CloudFlare through a hosting partner is different than signing up for CloudFlare directly, we wanted to provide some quick tips to help you get the most out of your CloudFlare experience.</p>
    <div>
      <h3>Things you should know about right away</h3>
      <a href="#things-you-should-know-about-right-away">
        
      </a>
    </div>
    <ol><li><p>You do not need to change your name servers when activating through a hosting partner. You would still manage your DNS entries at your hosting provider or registrar.</p></li><li><p>CloudFlare can only be enabled for CNAME records when activating through a hosting partner. To enable CloudFlare on your root domain (yourdomain.com), which is an A record, you need to have your hosting partner set a <a href="http://cloudflare.tenderapp.com/kb/adding-sites-cloudflare/how-do-i-handle-a-301-redirect">301 redirect</a> from your root domain to www. Not only will the redirect help accelerate and protect the root domain, it will also make the statistics in your CloudFlare account accurate. Note: If you have a naked domain, 'yourdomain.com', and you don't want your visitors to go to '<a href="http://www.yourdomain.com">www.yourdomain.com</a>', then you need to <a href="https://www.cloudflare.com/sign-up-new">sign up directly with CloudFlare</a>.</p></li><li><p>What you should do if you see any of the following error messages after enabling CloudFlare:</p></li></ol><p>"Host Not Configured to Serve Web Traffic": this error message will appear on the first request to your site after activating through a partner, then will go away after a few minutes. If it lasts for more than 10 minutes, contact your hosting provider and our support teams will work together to resolve it.</p><p>"<a href="https://support.cloudflare.com/entries/22052913-why-am-i-getting-a-gateway-error">CloudFlare-nginx 502 Bad Gateway</a>": This is an issue on the CloudFlare network. We deal with these quickly (less than 10 minutes). We publish all announcements regarding our network status on <a href="https://twitter.com/#!/CloudFlareSys">@CloudFlareSys</a>.</p><p>"<a href="https://support.cloudflare.com/entries/22036452-my-website-is-offline-or-unavailable">Website is Unavailable</a>": Either your server is offline and we don't have a copy of your site in cache <i><b>or</b></i> something on the origin server is blocking <a href="https://www.cloudflare.com/ips">CloudFlare's IPs</a>.</p><p>If your server is online, work with your hosting provider to find out what could be blocking CloudFlare's IPs on your server. The most common culprit is a security solution such as a firewall (CSF or iptables, for example). As soon as the block is removed, the error page will disappear.</p>
    <div>
      <h3>Key CloudFlare features</h3>
      <a href="#key-cloudflare-features">
        
      </a>
    </div>
    <p><i><b>SSL</b></i>: If you have SSL on the domain(s), you will need to upgrade to a <a href="https://www.cloudflare.com/plans">Pro account</a>. The cost for a Pro account is $20.00 per month for the first website and $5.00 for each additional site. In addition to the SSL support, you will also receive additional <a href="https://www.cloudflare.com/plans">security and performance</a> benefits.</p><p>Note: You will find the option to upgrade to Pro in your CloudFlare account.</p><p><i><b>Development Mode</b></i>: If you are making changes to the <a href="https://support.cloudflare.com/entries/22037282-what-file-extensions-does-cloudflare-cache-for-static-content">static content</a> on your website, temporarily bypass CloudFlare's cache so any changes appear immediately. You can find Development Mode either right in your hosting provider's control panel or by logging in to your CloudFlare account under CloudFlare Settings.</p><p><i><b>PageRules</b></i>: PageRules gives you more powerful performance and configuration options, including:</p><ul><li><p><a href="/introducing-pagerules-advanced-caching">Advanced caching configurations</a></p></li><li><p><a href="/introducing-pagerules-fine-grained-feature-co">Excluding URLs from CloudFlare's default caching and security options</a></p></li><li><p><a href="/introducing-pagerules-url-forwarding">Setting URL forwards and redirects</a></p></li></ul>
    <div>
      <h3>Recommended (Free!) Optional CloudFlare Features</h3>
      <a href="#recommended-free-optional-cloudflare-features">
        
      </a>
    </div>
    <p>CloudFlare has developed web content optimization features called <a href="/we-have-lift-off-rocket-loader-ga-is-mobile/">Rocket Loader</a> and <a href="/an-all-new-and-improved-autominify">Auto Minify</a>. Both are designed to load your site's resources even faster than the default CloudFlare configuration.</p><p><i><b>Rocket Loader</b></i>: Speeds up the delivery of your pages by automatically loading your JavaScript resources asynchronously. Rocket Loader works well for websites that have a lot of ads, widgets or plugins.</p><p><i><b>Auto Minify</b></i>: Removes all unnecessary characters from HTML, <a href="https://www.cloudflare.com/learning/performance/how-to-minify-css/">CSS</a>, and JavaScript to reduce file size.</p><p>Note: Both of these features are still in beta. If you encounter any issues, such as a broken plugin or JavaScript not working properly, please turn the feature off and <a href="https://www.cloudflare.com/wco-bug-report.html">report any bugs</a> to our team.</p><p>To turn on Rocket Loader and Auto Minify, log in to your CloudFlare account and go to CloudFlare Settings.</p><p><i><b>IPv6 Gateway</b></i>: Make your website IPv6 compatible by turning on the CloudFlare IPv6 gateway.</p>
    <div>
      <h3>Where you can find out more about CloudFlare</h3>
      <a href="#where-you-can-find-out-more-about-cloudflare">
        
      </a>
    </div>
    <p>The <a href="http://support.cloudflare.com/">CloudFlare Support Center</a> has answers to a number of questions. Searching our knowledge base is the fastest way to get a quick response to the majority of questions. Don't see the answer to your question? Please <a href="http://support.cloudflare.com/">contact CloudFlare</a>.</p>
    <div>
      <h3>Updates and Giveaways</h3>
      <a href="#updates-and-giveways">
        
      </a>
    </div>
    <p>We frequently post about product updates, early beta access to new features, system issues, and giveaways, so we recommend that you follow us on Facebook, Twitter or Google+:</p><ul><li><p><a href="https://www.facebook.com/CloudFlare">Facebook</a></p></li><li><p><a href="http://twitter.com/cloudflare">Twitter</a></p></li><li><p><a href="https://plus.google.com/100611700350554803650/">Google+</a></p></li></ul><p>Thank you for joining CloudFlare in <a href="https://www.cloudflare.com/cloudflare-partners-self-serve-program-open-beta/">partnership with your hosting provider.</a></p> ]]></content:encoded>
            <category><![CDATA[Onboarding]]></category>
            <category><![CDATA[Best Practices]]></category>
            <category><![CDATA[IPv6]]></category>
            <category><![CDATA[Rocket Loader]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <guid isPermaLink="false">2tnfWSXGjuZr9P0JsFjMWu</guid>
            <dc:creator>Damon Billian</dc:creator>
        </item>
    </channel>
</rss>