Google has developed a new TCP congestion control mechanism. Having previously deployed it on YouTube and Google.com, Google has now applied it to the Google Cloud Platform.
This mechanism, BBR (Bottleneck Bandwidth and Round-trip propagation time), overcomes some of the problems with existing loss-based mechanisms. Google sees the challenges as twofold, arising in shallow-buffered and deeply buffered networks. In shallow buffers, packet loss occurs before genuine congestion, and the result is congestion collapse, a phenomenon known since the early days of the Internet. In deep buffers, congestion occurs long before packet loss, so flows fail to slow down at the onset of congestion; the result is bufferbloat.
BBR moves away from loss-based congestion control and instead relies on a reliable estimate of the BDP to pace the data it sends. The BDP (Bandwidth-Delay Product) is the bottleneck bandwidth multiplied by the round-trip time, and gives the maximum amount of data that can be in flight across the network without queuing. This delivers high throughput at moderate packet-loss levels (higher levels than the CUBIC congestion-control algorithm tolerates in similar environments), and it keeps delay under control even where deep buffers exist. It can also transfer small files more quickly than the traditional TCP slow-start mechanism. The net result is consistent latency during a transfer (useful for video), as well as sustained bandwidth utilisation at higher packet-loss levels than normal.
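As a quick worked example (the link figures here are illustrative assumptions, not measurements), the BDP of a 100 Mbit/s bottleneck with a 40 ms round-trip time comes to about 500 KB, which is roughly the amount of data a BDP-driven sender would aim to keep in flight:

    # Illustrative BDP calculation in Python; the link figures are assumptions.
    def bdp_bytes(bottleneck_bits_per_s, rtt_s):
        """Bandwidth-Delay Product: the data that fits in flight on the path."""
        return bottleneck_bits_per_s * rtt_s / 8  # convert bits to bytes

    # A 100 Mbit/s bottleneck with a 40 ms round-trip time:
    print(bdp_bytes(100e6, 0.040))  # 500000.0 bytes, roughly 500 KB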
Linux 4.9 already contains BBR, it is the congestion control used within QUIC, and work is under way to add it to FreeBSD. An implementation is available on GitHub if you want the code. BBR is also under consideration by the IRTF Internet Congestion Control Research Group (ICCRG).
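If you want to experiment on a Linux 4.9 or later kernel with the tcp_bbr module loaded, one option is to request BBR per connection through the TCP_CONGESTION socket option. The sketch below is a minimal Python illustration, not part of any official tooling; the hostname is a placeholder, and the code falls back to the system default if BBR is not permitted:

    import socket

    # Minimal sketch: ask the kernel to use BBR for one TCP connection.
    # Assumes Linux 4.9+ with the tcp_bbr module available; if it is not
    # (or BBR is not in net.ipv4.tcp_allowed_congestion_control for
    # unprivileged users), setsockopt raises OSError and the kernel
    # default (typically CUBIC) stays in effect.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    except OSError:
        pass  # fall back to the default congestion control

    sock.connect(("example.com", 80))  # placeholder host
    print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16).strip(b"\x00"))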
Google has placed it on its main sites because most flows leaving the cloud benefit, especially YouTube’s video flows. If you’re doing bulk uploads to the cloud, using BBR might help you get the most from the links you have, whatever levels of packet loss you see. I can think of a lot of areas where this will cut the time taken to move data into the cloud. Perhaps you can too?