Tuesday, October 12, 2021

Deploying HTTP/3 on Windows Server at Scale

Windows Server 2022 reached general availability last month. Since then, in cooperation with the Microsoft 365 team, we have started deploying the latest Windows Server on Exchange Online service front door servers globally, with a primary goal of adding HTTP/3 support to https://outlook.office.com. We have only scaled the deployment to 20% of front-end server capacity so far, but the data we are getting back is looking great!

 

[Chart: HTTP/3 requests per second during the rollout]

 

Total requests per second (RPS) have steadily increased in step with the expanding deployment to Exchange Online service front doors. Now that the deployment is at 20% of capacity, we are seeing RPS peaks of nearly 50,000. Throughout the deployment we have been tracking last-mile request latencies. Last-mile request latency is the time spent between the client and the front-end server; essentially the total request time minus any back-end communication and processing. Exchange Online service front doors handle small-request, small-response workloads for various SPAs (Single Page Applications) such as Outlook on the Web, where responsiveness is a key differentiator for user experience.
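
As a rough illustration of that definition, the sketch below computes last-mile latency from per-request timings; the field names are hypothetical placeholders, not the actual Exchange Online telemetry schema.

# Minimal sketch of the last-mile latency definition described above.
# The record fields are hypothetical placeholders, not the real telemetry schema.

def last_mile_latency_ms(record: dict) -> float:
    """Time spent between the client and the front-end server: total request
    duration minus back-end communication and server-side processing."""
    return (
        record["total_request_ms"]
        - record["backend_calls_ms"]
        - record["server_processing_ms"]
    )

sample = {"total_request_ms": 120.0, "backend_calls_ms": 70.0, "server_processing_ms": 20.0}
print(last_mile_latency_ms(sample))  # 30.0 ms spent on the client <-> front door leg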

 

[Chart: last-mile HTTP request latency during the rollout]

 

The latency data coming back has held steady throughout the deployment. As the last-mile HTTP request latency data above shows, HTTP/3 is delivering huge gains for Microsoft 365. An 8% reduction from the baseline at the 50th percentile is not bad, but a reduction of more than 60% at P99.9 is huge!

 


Why is HTTP/3 Better?

You might immediately ask why HTTP/3 has so much lower latency. There are a lot of factors at play, most of which come from replacing the TCP and TLS layers with QUIC.

  • QUIC reduces the number of round trips in the handshake by combining the transport (QUIC) and security (TLS) handshakes. One less round trip means the HTTP request can be started (and completed) that much faster (see the round-trip sketch after this list).
  • QUIC reduces or removes cross-request (stream) head-of-line blocking by:
    • Only retransmitting the stream data that was carried in a lost QUIC packet, so streams that did not lose packets continue unaffected. Parallel HTTP requests no longer have to wait on earlier requests just because those requests lost a packet; only the request that lost a packet pays the extra latency.
    • Using first-in, first-out (FIFO) framing of sent stream data instead of round-robin. This ensures completion of a request is not delayed by the payload of any other request. In previous versions of HTTP, Windows used a round-robin approach to be “fair” to all requests. In HTTP/3 we moved to the FIFO model to reduce overall latency, completing requests as quickly as possible, in the order they arrive (see the scheduling sketch after this list).
    • Encrypting at packet boundaries (instead of in TLS records of up to 16 KB). HTTP over TCP/TLS encrypts data in larger chunks to reduce CPU cost, but when a single packet is lost, decryption of the entire chunk is delayed. QUIC pays the marginally higher CPU cost to make every packet independently decryptable.
  • QUIC builds/improves on TCP loss recovery by:
    • Eliminating TCP's retransmission ambiguity by never reusing packet numbers, so round-trip time measurements are always unambiguous.
    • Updating probe timeout (PTO) logic to quickly retransmit when loss is suspected. Much of this logic exists in TCP, but QUIC takes the latest learnings and puts them all together for the best results (see the PTO sketch after this list).
    • Removing TCP's selective acknowledgement (SACK) limitation of at most three SACK ranges, allowing QUIC to accurately acknowledge received packets in the face of sporadic, non-contiguous packet loss. As a result, QUIC uses the available bandwidth more efficiently because it does not unnecessarily retransmit data the peer has already received (see the acknowledgement sketch after this list).
  • QUIC uses pacing when sending packets.
    • This reduces burst sizes on the network, which generally reduces packet loss (see the pacing sketch after this list).
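
To make the handshake bullet concrete, here is a back-of-the-envelope round-trip count for a fresh, non-resumed connection, assuming TLS 1.3 over TCP without TCP Fast Open; it is an idealized model, not a measurement.

# Idealized round-trip counting for a cold connection: TLS 1.3 over TCP
# (no TCP Fast Open) versus a non-resumed QUIC handshake. 0-RTT resumption,
# which both stacks offer in some form, is ignored for simplicity.

def time_to_first_response_byte(rtt_ms: float, protocol: str) -> float:
    if protocol == "h2":       # HTTP/2 over TCP + TLS 1.3
        handshake_rtts = 2     # 1 RTT for TCP, 1 RTT for the TLS handshake
    elif protocol == "h3":     # HTTP/3 over QUIC
        handshake_rtts = 1     # transport and TLS handshakes are combined
    else:
        raise ValueError(protocol)
    request_rtts = 1           # send the request, receive the first response byte
    return (handshake_rtts + request_rtts) * rtt_ms

for rtt in (30.0, 100.0):
    saved = time_to_first_response_byte(rtt, "h2") - time_to_first_response_byte(rtt, "h3")
    print(f"RTT {rtt:.0f} ms: HTTP/3 saves roughly {saved:.0f} ms on a cold connection")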
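
The scheduling change is easiest to see with a toy model. The sketch below frames three equally sized, already-queued responses onto a link that sends one packet per time step and compares FIFO with round-robin; it illustrates the trade-off only and is not the actual http.sys or MsQuic scheduler.

# Toy model: three queued responses share a link that carries one packet per tick.
# FIFO finishes responses one after another; round-robin finishes them all late.
from collections import deque

def completion_times(packets_per_response: int, policy: str) -> list:
    remaining = [packets_per_response] * 3    # three concurrent responses
    done_at = [0, 0, 0]
    order = deque(range(3))
    tick = 0
    while any(remaining):
        tick += 1
        if policy == "fifo":
            # Finish the earliest outstanding response before touching the next one.
            i = next(i for i in range(3) if remaining[i])
        else:
            # Round-robin: one packet from each response in turn ("fair" sharing).
            while not remaining[order[0]]:
                order.rotate(-1)
            i = order[0]
            order.rotate(-1)
        remaining[i] -= 1
        if remaining[i] == 0:
            done_at[i] = tick
    return done_at

for policy in ("fifo", "round-robin"):
    times = completion_times(10, policy)
    print(policy, "completion ticks:", times, "mean:", sum(times) / 3)

With FIFO the first response is done after 10 ticks and the mean completion time drops from 29 to 20 ticks, which is the "complete requests as fast as possible" behavior described above.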
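
The probe-timeout behavior QUIC uses is standardized in RFC 9002. A simplified version of the PTO computation described there looks like the following; the constant and names follow the RFC, but this is a sketch rather than MsQuic's actual loss-recovery code.

# Simplified probe timeout (PTO) in the spirit of RFC 9002, Section 6.2.
# After each consecutive PTO expiry the timer backs off exponentially.

K_GRANULARITY_MS = 1.0   # kGranularity: minimum timer granularity

def probe_timeout_ms(smoothed_rtt: float, rtt_var: float,
                     max_ack_delay: float, pto_count: int) -> float:
    """PTO = smoothed_rtt + max(4 * rttvar, kGranularity) + max_ack_delay,
    doubled for each consecutive PTO that has already fired."""
    base = smoothed_rtt + max(4 * rtt_var, K_GRANULARITY_MS) + max_ack_delay
    return base * (2 ** pto_count)

# Example: 50 ms smoothed RTT, 10 ms RTT variance, 25 ms peer max_ack_delay.
for count in range(3):
    print(f"PTO after {count} consecutive timeouts: {probe_timeout_ms(50, 10, 25, count):.0f} ms")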
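
The acknowledgement bullet can be shown the same way: a QUIC ACK frame can carry an arbitrary number of received-packet ranges, while a TCP SACK option is limited to roughly three blocks. The sketch below shows how much of the receiver's state is lost to that limit when losses are scattered.

# Under scattered loss, TCP SACK (at most ~3 blocks per ACK) cannot describe
# everything the receiver holds, so the sender may retransmit data it already has.
# A QUIC ACK frame can list every received range.

def to_ranges(received: set) -> list:
    """Collapse received packet numbers into contiguous (first, last) ranges."""
    ranges = []
    for pn in sorted(received):
        if ranges and pn == ranges[-1][1] + 1:
            ranges[-1] = (ranges[-1][0], pn)
        else:
            ranges.append((pn, pn))
    return ranges

# Packets 0..29 were sent and every fifth packet was lost, leaving many small gaps.
received = {pn for pn in range(30) if pn % 5 != 4}
ranges = to_ranges(received)

print("QUIC ACK frame reports all", len(ranges), "ranges:", ranges)
print("TCP SACK reports only ~3 blocks:", ranges[-3:])
print("Earlier gaps look unacknowledged, so TCP may retransmit data the peer already has.")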
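
Finally, pacing: instead of handing a full congestion window of packets to the network in one burst, a paced sender spreads them across the round trip. A minimal sketch of that spacing follows; real pacers, including the one in MsQuic, are more sophisticated than this.

# Minimal pacing sketch: spread one congestion window of packets evenly across
# the RTT instead of sending them back to back in a single burst.

def paced_send_times_ms(cwnd_bytes: int, packet_bytes: int, rtt_ms: float) -> list:
    """Departure time, in ms, for each packet of one round trip's worth of data."""
    num_packets = cwnd_bytes // packet_bytes
    interval = rtt_ms / num_packets       # roughly a cwnd/RTT sending rate
    return [i * interval for i in range(num_packets)]

# 10 packets per 100 ms RTT: a burst puts them all on the wire at t = 0,
# while pacing spaces them 10 ms apart and avoids overflowing shallow buffers.
print([f"{t:.0f} ms" for t in paced_send_times_ms(12_000, 1_200, 100.0)])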

To sum all this up, QUIC makes parallel work independent, improves the speed and accuracy of loss detection, and generally tries to be a better citizen on the network. As you can see from the data above, this results in HTTP/3 providing huge gains in responsiveness for Microsoft 365!

 

Deployment of HTTP/3 support in Exchange Online is the latest effort undertaken by the Microsoft 365 team to improve end-user experience via network-level engineering. Customers who have aligned with the Microsoft 365 Principles of Network Connectivity will experience the most benefit from this improvement and others to come. If you are a Microsoft 365 tenant administrator, you can find guidance on how to improve your connectivity in the Microsoft 365 admin center.

 

Trying Out HTTP/3 on Your Server

If you are currently on an older version of Windows Server using HTTP and want to try out HTTP/3, please take a look here. If you are using .NET instead, you can try HTTP/3 with the ASP.NET Kestrel web server, but be aware that it is still in preview.
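
Independent of the server stack, one quick way to sanity-check an HTTP/3 deployment is to look for the Alt-Svc response header that advertises "h3" to clients. The small sketch below uses only the Python standard library; it confirms the advertisement only, not that the QUIC endpoint itself is reachable, and the URL is just an example.

# Check whether a server advertises HTTP/3 via the Alt-Svc response header.
# The request itself is made over HTTP/1.1 by urllib; only the header is inspected.
import urllib.request

def advertises_h3(url: str) -> bool:
    with urllib.request.urlopen(url) as response:
        alt_svc = response.headers.get("Alt-Svc", "")
    return "h3" in alt_svc

print(advertises_h3("https://outlook.office.com"))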
