
Optimizing TLS over TCP to reduce latency

2016-06-10

3 min read

The layered nature of the Internet (HTTP on top of some reliable transport (e.g. TCP), TCP on top of some datagram layer (e.g. IP), IP on top of some link (e.g. Ethernet)) has been very important in its development. Different link layers have come and gone over time (any readers still using 802.5?) and this flexibility also means that a connection from your web browser might traverse your home network over WiFi, then down a DSL line, across fiber and finally be delivered over Ethernet to the web server. Each layer is blissfully unaware of the implementation of the layer below it.

But there are some disadvantages to this model. In the case of TLS (the most common standard used for sending encrypted data across the Internet, and the protocol your browser uses when visiting an https:// web site) the layering of TLS on top of TCP can cause delays to the delivery of a web page.

That’s because TLS divides the data being transmitted into records of a fixed (maximum) size and then hands those records to TCP for transmission. TCP promptly divides those records up into segments which are then transmitted. Ultimately, those segments are sent inside IP packets which traverse the Internet.

In order to prevent congestion on the Internet and to ensure reliable delivery, TCP will only send a limited number of segments before waiting for the receiver to acknowledge that the segments have been received. In addition, TCP guarantees that segments are delivered in order to the application. Thus if a packet is dropped somewhere between sender and receiver it’s possible for a whole bunch of segments to be held in a buffer waiting for the missing segment to be retransmitted before the buffer can be released to the application.

TLS and TCP

What this means for TLS is that a large record that is split across multiple TCP segments can encounter unexpected delays. TLS can only handle complete records, so a missing TCP segment delays the whole TLS record. A full 16KB record, for example, spans roughly eleven 1,460-byte TCP segments, and losing any one of them holds up the entire record.

At the start of a TCP connection, while TCP slow start is ramping up, a record may be split across multiple segments that are delivered relatively slowly. Later in the connection, one of the segments that a TLS record has been split into may be lost, delaying the record until the missing segment is retransmitted.

Thus it’s preferable to not use a fixed TLS record size but adjust the record size as the underlying TCP connection spins up (and down in the case of congestion). Starting with a small record size helps match the record size to the segments that TCP is sending at the start of a connection. Once the connection is running the record size can be increased.

CloudFlare uses NGINX to handle web requests. Out of the box, NGINX does not support dynamic TLS record sizes: it uses a fixed record size, 16KB by default, which can be adjusted with the ssl_buffer_size parameter.
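
For illustration, this is how the fixed record size is tuned in stock NGINX (a minimal sketch: the server name, certificate paths and the 4k value are placeholders, not recommendations):

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        # Stock NGINX uses one fixed TLS record size for the whole connection.
        # The default is 16k; a smaller value limits the damage from a lost
        # TCP segment but adds per-record overhead.
        ssl_buffer_size 4k;
    }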

Dynamic TLS Records in NGINX

We modified NGINX to add support for dynamic TLS record sizes and are open sourcing our patch. You can find it here. The patch adds parameters to the NGINX ssl module.

ssl_dyn_rec_size_lo: the TLS record size to start with. Defaults to 1369 bytes (designed to fit the entire record in a single TCP segment: 1369 = 1500 - 40 (IPv6) - 20 (TCP) - 10 (TCP timestamps) - 61 (max TLS overhead))

ssl_dyn_rec_size_hi: the TLS record size to grow to. Defaults to 4229 bytes (designed to fit the entire record in 3 TCP segments)

ssl_dyn_rec_threshold: the number of records to send before changing the record size.

Each connection starts with records of the size ssl_dyn_rec_size_lo. After sending ssl_dyn_rec_threshold records the record size is increased to ssl_dyn_rec_size_hi. After sending an additional ssl_dyn_rec_threshold records with size ssl_dyn_rec_size_hi the record size is increased to ssl_buffer_size.

ssl_dyn_rec_timeout: if the connection idles for longer than this time (in seconds) then the TLS record size is reduced to ssl_dyn_rec_size_lo and the logic above is repeated. If this value is set to 0 then dynamic TLS record sizes are disabled and the fixed ssl_buffer_size will be used instead.
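
Putting the pieces together, a server block for the patched NGINX might look like the following. This is a sketch only: the record sizes are the defaults described above, while the threshold and timeout values, the server name and the certificate paths are illustrative assumptions.

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        # The size records eventually grow to once the connection is warmed up.
        ssl_buffer_size 16k;

        # Start with records that fit in a single TCP segment...
        ssl_dyn_rec_size_lo 1369;

        # ...then grow to records that fit in three segments...
        ssl_dyn_rec_size_hi 4229;

        # ...after this many records at each size (illustrative value).
        ssl_dyn_rec_threshold 40;

        # Drop back to ssl_dyn_rec_size_lo after this many idle seconds;
        # 0 disables dynamic record sizing (illustrative value).
        ssl_dyn_rec_timeout 3;
    }

The effect is that the first responses on a connection go out in single-segment records, while bulk transfers later in the connection still get the efficiency of full 16KB records.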

Conclusion

We hope people find our NGINX patch useful and would be very happy to hear from people who use it and/or improve it.
