TCP Fast Open has the potential to further reduce latency for at least some TLS connections. With TLS there are also none of the idempotency concerns that arise when using it with plain HTTP, as the data sent in the fast-open SYN would just be the TLS ClientHello. Our current cache cluster kernels and nginx builds support it, requiring only sysctl adjustments and an nginx config change. Concerns/work that need addressing before we really enable it:
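For reference, the server-side knobs are roughly as follows. This is a sketch, not a tested config: the queue length 150 is an illustrative placeholder, not a tuned value (tuning it is one of the open questions below).

```shell
# Kernel: net.ipv4.tcp_fastopen is a bitmask; bit 1 (0x2) enables
# server-side TFO (bit 0 is client-side, so 3 would enable both).
sysctl -w net.ipv4.tcp_fastopen=2

# nginx: add fastopen=N to the TLS listen directive, e.g.:
#   listen 443 ssl fastopen=150;
# N caps the per-socket queue of TFO connections that have not yet
# completed the three-way handshake.
```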
- Is client-side adoption high enough to matter yet? Right now probably fairly low, but growing. Linux+Chrom(e|ium) can do it by default on newer kernels, and Android 5 can do it. El Capitan Macs (Safari? Chrome?) may start trying it in the fall.
- How do we appropriately tune the limit on the outstanding fastopen queue per socket? (In nginx terms, this is the N in the fastopen=N parameter.) Too high has DoS implications, both against us directly and as a potential reflector. Too low and we may reject legitimate TFO attempts, given our high SYN concurrency.
- Is LVS a factor here for TFO compatibility, and does it (or should it) pay attention to the cookies?
- We'd need to generate a random TFO cookie key, distribute it securely across the machines in a cluster, and rotate it periodically; the alternative is to rely on client source-IP hashing and only periodically regenerate the key locally per machine. This is somewhat similar to the RFC 5077 secret-distribution problem, but considerably less security-critical: leaking the key opens us to easier DoS, but does not affect the TLS security of legitimate client traffic. Everything here also depends on answering the LVS question above.
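The key-distribution step in the last bullet could be sketched as below. This is hypothetical: the host names are placeholders, it only prints the commands rather than running them, and a real rotation scheme would push over whatever secure channel our config management already provides. The kernel's `net.ipv4.tcp_fastopen_key` sysctl expects four 8-hex-digit words joined by dashes.

```shell
#!/bin/sh
# Sketch: generate one random 128-bit TFO server key and emit the
# sysctl command each cache host would need to apply.
# Format the 32 hex chars as xxxxxxxx-xxxxxxxx-xxxxxxxx-xxxxxxxx.
KEY=$(openssl rand -hex 16 | sed 's/.\{8\}/&-/g; s/-$//')

for host in cp1001 cp1002 cp1003; do   # illustrative host names
    echo "ssh ${host} sysctl -w net.ipv4.tcp_fastopen_key=${KEY}"
done
```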