
Replace your hardware firewalls with Cloudflare One

2021-12-06

9 min read

Today, we’re excited to announce new capabilities to help customers make the switch from hardware firewall appliances to a true cloud-native firewall built for next-generation networks. Cloudflare One provides a secure, performant, and Zero Trust-enabled platform for administrators to apply consistent security policies across all of their users and resources. Best of all, it’s built on top of our global network, so you never need to worry about scaling, deploying, or maintaining your edge security hardware.

As part of this announcement, Cloudflare launched the Oahu program today to help customers leave legacy hardware behind; in this post we’ll break down the new capabilities that solve the problems of previous firewall generations and save IT teams time and money.

How did we get here?

In order to understand where we are today, it’ll be helpful to start with a brief history of IP firewalls.

Stateless packet filtering for private networks

The first generation of network firewalls was designed mostly to meet the security requirements of private networks, which started with the castle and moat architecture we defined as Generation 1 in our post yesterday. Firewall administrators could build policies around signals available at layers 3 and 4 of the OSI model (primarily IPs and ports), which was perfect for (e.g.) enabling a group of employees on one floor of an office building to access servers on another via a LAN.
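To make that concrete, here is a minimal sketch of stateless layer 3/4 filtering: each packet is judged against a rule list using only its own header fields, with no memory of previous packets. The networks, port, and rule set below are invented purely for illustration.

```python
# Hypothetical example: a stateless layer 3/4 packet filter.
# Every value here (networks, port, protocol) is invented for illustration.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

# Rules: (source network, destination network, destination port, protocol)
RULES = [
    (ip_network("10.1.0.0/16"), ip_network("10.2.0.0/16"), 443, "tcp"),  # office floor -> server LAN
]

def stateless_allow(pkt: Packet) -> bool:
    """Judge each packet on its own header fields; nothing about prior packets is remembered."""
    for src_net, dst_net, port, proto in RULES:
        if (ip_address(pkt.src_ip) in src_net
                and ip_address(pkt.dst_ip) in dst_net
                and pkt.dst_port == port
                and pkt.protocol == proto):
            return True
    return False  # default deny

print(stateless_allow(Packet("10.1.4.20", "10.2.0.5", 443, "tcp")))    # True
print(stateless_allow(Packet("203.0.113.9", "10.2.0.5", 443, "tcp")))  # False
```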

This packet filtering capability was sufficient until networks got more complicated, including by connecting to the Internet. IT teams began needing to protect their corporate network from bad actors on the outside, which required more sophisticated policies.

Better protection with stateful & deep packet inspection

Firewall hardware evolved to include stateful packet inspection and the beginnings of deep packet inspection, extending basic firewall concepts by tracking the state of connections passing through them. This enabled administrators to (e.g.) block all incoming packets not associated with an already established outgoing connection.
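A stateful filter adds a connection table on top of the stateless idea: an inbound packet is admitted only if it matches a connection an inside host already opened. The sketch below is a simplified model of that bookkeeping (not a real TCP state machine), with invented addresses and ports.

```python
# Simplified model of stateful filtering: remember outbound flows, admit only matching replies.
# Addresses and ports are invented; a real firewall also tracks TCP state, timeouts, etc.
connection_table: set = set()  # entries: (inside_ip, inside_port, outside_ip, outside_port)

def record_outbound(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> None:
    """Called when an inside host opens a connection to the outside."""
    connection_table.add((src_ip, src_port, dst_ip, dst_port))

def allow_inbound(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> bool:
    """Allow an inbound packet only if it is the reverse of a known outbound flow."""
    return (dst_ip, dst_port, src_ip, src_port) in connection_table

record_outbound("10.1.4.20", 51544, "198.51.100.10", 80)
print(allow_inbound("198.51.100.10", 80, "10.1.4.20", 51544))  # True: reply to an open connection
print(allow_inbound("203.0.113.9", 4444, "10.1.4.20", 51544))  # False: unsolicited inbound traffic
```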

These new capabilities provided more sophisticated protection from attackers. But the advancement came at a cost: supporting this higher level of security required more compute and memory resources. These requirements meant that security and network teams had to get better at planning the capacity they’d need for each new appliance and make tradeoffs between cost and redundancy for their network.

In addition to the cost tradeoffs, these new firewalls provided only limited insight into how the network was used. You could tell that users were accessing 198.51.100.10 on port 80, but investigating further what they were accessing required a reverse lookup of the IP address. Even that would only land you at the provider's front page, with no insight into what was actually accessed, the reputation of the domain or host, or any other information to help answer "Is this a security event I need to investigate further?" Determining the source was just as difficult: it required correlating a private IP address handed out via DHCP with a device, and then the device with a user (assuming you remembered to set long lease times and never shared devices).

Application awareness with next generation firewalls

To address these challenges, the industry introduced the Next Generation Firewall (NGFW). For years these were, and in some cases still are, the industry-standard corporate edge security device. They kept all the capabilities of previous generations while adding application awareness to help administrators gain more control over what passed through their security perimeter. NGFWs introduced the concept of vendor-provided or externally-provided application intelligence: the ability to identify individual applications from traffic characteristics, which could then be fed into policies defining what users could and couldn't do with a given application.

As more applications moved to the cloud, NGFW vendors started to provide virtualized versions of their appliances. These freed administrators from worrying about lead times for the next hardware version and offered greater flexibility when deploying to multiple locations.

Over the years, as the threat landscape continued to evolve and networks became more complex, NGFWs added more security capabilities, some of which helped consolidate multiple appliances. Depending on the vendor, these included VPN gateways, IDS/IPS, Web Application Firewalls, and even features like Bot Management and DDoS protection. But even with these features, NGFWs had their drawbacks: administrators still needed to spend time designing and configuring redundant (at least primary/secondary) appliances, choosing which locations got firewalls, and accepting the performance penalty of backhauling traffic to those locations from everywhere else. Even then, careful IP address management was required when creating policies that used addresses as a stand-in for identity.

Adding user-level controls to move toward Zero Trust

As firewall vendors added more sophisticated controls, a parallel paradigm shift in network architecture emerged to address the security concerns that arose as applications and users left the organization's "castle" for the Internet. Zero Trust security means that no one is trusted by default from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. Firewalls started incorporating Zero Trust principles by integrating with identity providers (IdPs) and allowing administrators to build policies around user groups, for example "only Finance and HR can access payroll systems", enabling finer-grained control and reducing the need to rely on IP addresses to approximate identity.
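In practice, an identity-aware rule like the one above reduces to checking group membership asserted by the IdP rather than matching source IPs. A minimal sketch, with invented group and application names:

```python
# Hypothetical example: identity-based access policy using IdP group membership.
# The application name and groups are invented for illustration.
APP_POLICY = {"payroll": {"Finance", "HR"}}

def allowed(user_groups: set, app: str) -> bool:
    """Allow access only when the user holds at least one group named in the app's policy."""
    required = APP_POLICY.get(app)
    if required is None:
        return False  # unknown application: deny by default
    return bool(user_groups & required)

print(allowed({"HR", "AllStaff"}, "payroll"))           # True
print(allowed({"Engineering", "AllStaff"}, "payroll"))  # False
```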

These policies have helped organizations lock down their networks and get closer to Zero Trust, but CIOs are still left with problems: what happens when they need to integrate another organization’s identity provider? How do they safely grant access to corporate resources for contractors? And these new controls don’t address the fundamental problems with managing hardware, which still exist and are getting more complex as companies go through business changes like adding and removing locations or embracing hybrid forms of work. CIOs need a solution that works for the future of corporate networks, instead of trying to duct tape together solutions that address only some aspects of what they need.

The cloud-native firewall for next-generation networks

Cloudflare is helping customers build the future of their corporate networks by unifying network connectivity and Zero Trust security. Customers who adopt the Cloudflare One platform can deprecate their hardware firewalls in favor of a cloud-native approach, making IT teams’ lives easier by solving the problems of previous generations.

Connect any source or destination with flexible on-ramps

Rather than managing different devices for different use cases, all traffic across your network — from data centers, offices, cloud properties, and user devices — should be able to flow through a single global firewall. Cloudflare One enables you to connect to the Cloudflare network with a variety of flexible on-ramp methods including network-layer (GRE or IPsec tunnels) or application-layer tunnels, direct connections, BYOIP, and a device client. Connectivity to Cloudflare means access to our entire global network, which eliminates many of the challenges with physical or virtualized hardware:

  • No more capacity planning: The capacity of your firewall is the capacity of Cloudflare’s global network (currently >100Tbps and growing).

  • No more location planning: Cloudflare’s Anycast network architecture enables traffic to connect automatically to the closest location to its source. No more picking regions or worrying about where your primary/backup appliances are — redundancy and failover are built in by default.

  • No maintenance downtimes: Improvements to Cloudflare’s firewall capabilities, like all of our products, are deployed continuously across our global edge.

  • DDoS protection built in: No need to worry about DoS attacks overwhelming your firewalls; Cloudflare’s network automatically blocks attacks close to their source and sends only the clean traffic on to its destination.

Configure comprehensive policies, from packet filtering to Zero Trust

Cloudflare One policies can be used to secure and route your organization's traffic across all of these on-ramps. These policies can be crafted using all the same attributes available in a traditional NGFW, while also expanding to include Zero Trust attributes such as signals from one or more IdPs or endpoint security providers.

When looking strictly at layers 3 through 5 of the OSI model, policies can be based on IP, port, protocol, and other attributes in both a stateless and stateful manner. These attributes can also be used to build your private network on Cloudflare when used in conjunction with any of the identity attributes and the Cloudflare device client.

Additionally, to help relieve the burden of managing IP allow/block lists, Cloudflare provides a set of managed lists that can be applied to both stateless and stateful policies. And on the more sophisticated end, you can also perform deep packet inspection and write programmable packet filters to enforce a positive security model and thwart the largest of attacks.

Cloudflare's intelligence helps power the application and content categories for our Layer 7 policies, which can be used to protect your users from security threats, prevent data exfiltration, and audit usage of company resources. This starts with our protected DNS resolver, built on top of our performance-leading consumer 1.1.1.1 service. Protected DNS allows administrators to stop their users from navigating to or resolving any known or potential security risks. Once a domain is resolved, administrators can apply HTTP policies to intercept, inspect, and filter a user's traffic. And if those web applications are self-hosted or SaaS-enabled, you can even protect them using a Cloudflare Access policy, which acts as a web-based identity proxy.
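One way to think about this layering is as a pipeline: the protected resolver decides whether a domain should resolve at all, and HTTP policies then inspect the requests that do go through. The toy model below illustrates that ordering; the domains, categories, and rules are invented and are not Cloudflare's actual category data.

```python
# Toy model of layered filtering: DNS first, then HTTP.
# Domains, categories, and rules are invented for illustration only.
DNS_CATEGORIES = {"known-malware.example": "security_risk", "fileshare.example": "file_sharing"}
BLOCKED_DNS_CATEGORIES = {"security_risk"}
BLOCKED_HTTP_PATH_PREFIXES = ("/upload",)  # e.g. stop uploads to unsanctioned apps

def evaluate(domain: str, path: str) -> str:
    # Step 1: protected DNS stops known or potential risks before any connection is made.
    if DNS_CATEGORIES.get(domain) in BLOCKED_DNS_CATEGORIES:
        return "blocked at DNS"
    # Step 2: HTTP policies inspect and filter requests that resolved successfully.
    if any(path.startswith(prefix) for prefix in BLOCKED_HTTP_PATH_PREFIXES):
        return "blocked at HTTP"
    return "allowed"

print(evaluate("known-malware.example", "/"))    # blocked at DNS
print(evaluate("fileshare.example", "/upload"))  # blocked at HTTP
print(evaluate("fileshare.example", "/view"))    # allowed
```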

Last but not least, to help prevent data exfiltration, administrators can lock down access to external HTTP applications by utilizing remote browser isolation. And coming soon, administrators will be able to log and filter which commands a user can execute over an SSH session.

Simplify policy management: one click to propagate rules everywhere

Traditional firewalls required deploying policies on each device or configuring and maintaining an orchestration tool to help with this process. In contrast, Cloudflare allows you to manage policies across our entire network from a simple dashboard or API, or use Terraform to deploy infrastructure as code. Changes propagate across the edge in seconds thanks to our Quicksilver technology. Users can get visibility into traffic flowing through the firewall with logs, which are now configurable.
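As a sketch of what managing policies through the API can look like, the snippet below creates a Gateway DNS policy with Python's requests library. The endpoint path, field names, and filter expression follow the general shape of Cloudflare's v4 API, but treat them as assumptions and confirm the exact schema against the current API documentation (or use the Terraform provider instead).

```python
# Sketch: create a Gateway DNS policy via the Cloudflare v4 API.
# The endpoint, fields, and filter expression are assumptions; check the current API docs.
import os
import requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]  # your Cloudflare account ID
API_TOKEN = os.environ["CF_API_TOKEN"]    # API token scoped to Zero Trust / Gateway

rule = {
    "name": "Block security risks",
    "action": "block",
    "enabled": True,
    "filters": ["dns"],
    # Illustrative wirefilter-style expression; the category IDs here are placeholders.
    "traffic": "any(dns.security_category[*] in {68 80 83})",
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/gateway/rules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=rule,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # the rule propagates across Cloudflare's edge within seconds
```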

Consolidating multiple firewall use cases in one platform

Firewalls need to cover a myriad of traffic flows to satisfy different security needs, including blocking bad inbound traffic, filtering outbound connections to ensure employees and applications are only accessing safe resources, and inspecting internal (“East/West”) traffic flows to enforce Zero Trust. Different hardware often covers one or multiple use cases at different locations; we think it makes sense to consolidate these as much as possible to improve ease of use and establish a single source of truth for firewall policies. Let’s walk through some use cases that were traditionally satisfied with hardware firewalls and explain how IT teams can satisfy them with Cloudflare One.

Protecting a branch office

Traditionally, IT teams needed to provision at least one hardware firewall per office location (multiple for redundancy). This involved forecasting the amount of traffic at a given branch and ordering, installing, and maintaining the appliance(s). Now, customers can connect branch office traffic to Cloudflare from whatever hardware they already have — any standard router that supports GRE or IPsec will work — and control filtering policies across all of that traffic from Cloudflare’s dashboard.

Step 1: Establish a GRE or IPsec tunnel

The majority of mainstream hardware providers support GRE and/or IPsec as tunneling methods. Cloudflare will provide an Anycast IP address to use as the tunnel endpoint on our side, and you configure a standard GRE or IPsec tunnel with no additional steps; the Anycast IP provides automatic connectivity to every Cloudflare data center.
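As an example, on a Linux router the tunnel can be brought up with standard iproute2 commands; the sketch below wraps them in Python. The Anycast endpoint, local WAN address, interface addressing, and routing table number are placeholders: Cloudflare provides the actual tunnel endpoint and addressing when the tunnel is provisioned.

```python
# Sketch: stand up a GRE tunnel to Cloudflare from a Linux router using iproute2.
# All addresses and the routing table number are placeholders.
import subprocess

CLOUDFLARE_ANYCAST_IP = "203.0.113.1"   # placeholder tunnel endpoint provided by Cloudflare
LOCAL_WAN_IP = "198.51.100.2"           # placeholder public address of your router
TUNNEL_INTERFACE_IP = "10.100.0.2/31"   # placeholder interface addressing for the tunnel

def run(cmd: str) -> None:
    print(f"+ {cmd}")
    subprocess.run(cmd.split(), check=True)

run(f"ip tunnel add cf_gre mode gre remote {CLOUDFLARE_ANYCAST_IP} local {LOCAL_WAN_IP} ttl 255")
run(f"ip addr add {TUNNEL_INTERFACE_IP} dev cf_gre")
run("ip link set cf_gre up mtu 1476")  # 1500 minus 24 bytes of GRE/IP overhead
run("ip route add 0.0.0.0/0 dev cf_gre table 100")  # route selected traffic via the tunnel
```

Steering traffic into that routing table, and the equivalent IPsec configuration, is left out for brevity.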

Step 2: Configure network-layer firewall rules

All IP traffic can be filtered through Magic Firewall, which includes the ability to construct policies based on any packet characteristic, e.g. source or destination IP, port, protocol, country, or bit field match. Magic Firewall also integrates with IP Lists and includes advanced capabilities like programmable packet filtering.
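These rules can also be managed as code. The sketch below pushes a Magic Firewall rule through the Rulesets API (Magic Firewall rules live behind the magic_transit phase entrypoint); the exact endpoint shape, field names, and filter expression are assumptions drawn from the general v4 API pattern, so verify them against the current API reference.

```python
# Sketch: push a Magic Firewall rule as code via the Rulesets API (magic_transit phase).
# The endpoint shape, fields, and expression are assumptions; verify against the API reference.
import os
import requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]

ruleset = {
    "rules": [
        {
            "action": "block",
            "description": "Drop inbound Telnet and RDP at the edge",
            "expression": "tcp.dstport in {23 3389}",  # illustrative packet-match expression
            "enabled": True,
        }
    ]
}

resp = requests.put(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}"
    "/rulesets/phases/magic_transit/entrypoint",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=ruleset,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```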

Step 3: Upgrade traffic for application-level firewall rules

After it flows through Magic Firewall, TCP and UDP traffic can be "upgraded" for fine-grained filtering through Cloudflare Gateway. This upgrade unlocks a full suite of filtering capabilities including application and content awareness, identity enforcement, SSH/HTTP proxying, and DLP.

Before: hardware firewalls deployed at every branch, with different models and sometimes different vendors to manage. After: traffic connected to Cloudflare's global Anycast network, with firewall policies deployed in one place and propagated globally.

Protecting a high-traffic data center or VPC

Firewalls used for processing data at a high-traffic headquarters or data center location can be some of the largest capital expenditures in an IT team’s budget. Traditionally, data centers have been protected by beefy appliances that can handle high volumes gracefully, which comes at an increased cost. With Cloudflare’s architecture, because every server across our network can share the responsibility of processing customer traffic, no one device creates a bottleneck or requires expensive specialized components. Customers can on-ramp traffic to Cloudflare with BYOIP, a standard tunnel mechanism, or Cloudflare Network Interconnect, and process up to terabits per second of traffic through firewall rules smoothly.

Before: traffic through the data center had to flow through expensive appliances that could keep up with the volume. After: traffic balanced across thousands of commodity servers, with malicious traffic blocked close to its source.

Protecting a roaming or hybrid workforce

In order to connect to corporate resources or get secure access to the Internet, users in traditional network architectures establish a VPN connection from their devices to a central location where firewalls are located. There, their traffic is processed before it's allowed to its final destination. This architecture introduces performance penalties, and while modern firewalls can enable user-level controls, they don't necessarily achieve Zero Trust. Cloudflare enables customers to use a device client as an on-ramp to Zero Trust policies; watch out for more updates later this week on how to smoothly deploy the client at scale.

Before: user traffic backhauled to a firewall appliance over VPN. After: policy enforced at the edge location closest to the user.

What’s next

We can’t wait to keep evolving this platform to serve new use cases. We’ve heard from customers who are interested in expanding NAT Gateway functionality through Cloudflare One, who want richer analytics, reporting, and user experience monitoring across all our firewall capabilities, and who are excited to adopt a full suite of DLP features across all of their traffic flowing through Cloudflare’s network. Updates on these areas and more are coming soon (stay tuned).

Cloudflare’s new firewall capabilities are available for enterprise customers today. Learn more here and check out the Oahu Program to learn how you can migrate from hardware firewalls to Zero Trust — or talk to your account team to get started today.


