We have long been looking for ways to expand our cacheable content beyond anonymous requests. Once a user is logged in, a number of personalizations, primarily in the chrome around the content (user name, tabs, links), make it hard to reuse a cached copy of an entire page. Initial trials to perform those personalizations with [ESI](https://en.wikipedia.org/wiki/Edge_Side_Includes) [were done as early as 2004](http://www.squid-cache.org/mail-archive/squid-users/200401/0890.html), but even in recent Varnish testing we have seen performance and stability issues. Server-side composition technologies like ESI or SSI also introduce a second code path, which makes it harder to develop and test front-end code without intimate knowledge of a complex part of our stack.
An alternative is to use JavaScript for the composition. This opens up the possibility of running the same JS code
- on the client, in [service workers](http://www.w3.org/TR/2015/WD-service-workers-20150625/) (essentially caching HTTP proxies running in the browser), or
- on the server, behind edge caches, in a JS runtime like Node.js with an implementation of the service worker API, processing cache misses and authenticated first-page views.
By using JavaScript, we get to use familiar and mature JS templating systems with pre-compilation support, which simplifies development and testing. While Varnish performance drops significantly with each ESI include (we measured a 50% drop with five includes), pre-compiled JS templates can potentially perform fairly fine-grained customizations with moderate overhead.
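To illustrate the idea (all names and markup here are hypothetical, not an existing MediaWiki API), a pre-compiled template is just a JS function, so personalizing a cached article body becomes string concatenation rather than an ESI include:

```javascript
// Hypothetical pre-compiled template: a plain function wrapping raw
// article content in per-user chrome (user name, tabs, links).
function chromeTemplate({ userName, title, contentHTML }) {
  return '<!doctype html><html><head><title>' + title + '</title></head>'
    + '<body><header class="personal-bar">Logged in as ' + userName + '</header>'
    + '<main>' + contentHTML + '</main></body></html>';
}

// A single cached, anonymous copy of the article body can be customized
// for each user without re-rendering the whole page.
const html = chromeTemplate({
  userName: 'ExampleUser',
  title: 'Zürich',
  contentHTML: '<p>Zürich is the largest city in Switzerland.</p>',
});
```

Because the template is an ordinary function, the same code can run unchanged in a browser service worker or in a server-side JS runtime.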
In [browsers that support it](https://jakearchibald.github.io/isserviceworkerready/) (like current Chrome, [about 40% of the market](http://caniuse.com/#search=service%20workers)), we can preload templates and styles for specific end points and improve performance by fetching only the raw content. Because the worker acts as a proxy and produces an HTML string, we also avoid changes to the regular page JavaScript. In contrast to single-page applications, we don't incur routing complexity or heavy first-load penalties.
An interesting possibility is to prototype this in a service worker targeting regular page views (`/wiki/{title}`) only, while letting all other requests fall through to the regular request flow.
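A minimal sketch of that prototype, assuming the standard service worker `fetch` event (`composePage` is a hypothetical function that would fetch raw content and apply the pre-compiled templates):

```javascript
// Hypothetical route check: return the page title for /wiki/{title}
// requests, or null so everything else falls through untouched.
function wikiTitleFromPath(pathname) {
  const match = /^\/wiki\/(.+)$/.exec(pathname);
  return match ? decodeURIComponent(match[1]) : null;
}

// In the worker (browser, or a server-side service worker implementation),
// only matching page views are intercepted. Not calling respondWith()
// lets the browser handle the request through the regular flow.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('fetch', (event) => {
    const title = wikiTitleFromPath(new URL(event.request.url).pathname);
    if (title !== null) {
      // event.respondWith(composePage(title)); // composePage: hypothetical
    }
  });
}
```

Keeping the routing check a pure function makes it easy to unit-test outside the worker context.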
## See also
- {T34618}
- [ServiceWorkers and Streams for the win by Jake Archibald](https://jakearchibald.com/2016/streams-ftw/), showing Chrome 50's experimental streaming response composition support using a Wikipedia frontend.
- [Fast and resilient web apps: Tools and techniques - Ilya Grigorik at Google I/O 2016](https://www.youtube.com/watch?v=aqvz5Oqs238&feature=youtu.be&t=24m32s) - 10+% of navigations fail on 2G, and this rate is similar in India & the UK
- [Offline Wikipedia demo by Jake Archibald](https://github.com/jakearchibald/offline-wikipedia)
- [Making Netflix.com Faster](http://techblog.netflix.com/2015/08/making-netflixcom-faster.html): Netflix on its move to a JS-only frontend
- [Using service workers to adapt to network conditions](https://paul.kinlan.me/using-service-worker-server-side-adaption-based-on-network-type/)
- {T101731}
- [Reflections on 10 years of ESI by Mark Nottingham (2011)](https://www.mnot.net/blog/2011/10/21/why_esi_is_still_important_and_how_to_make_it_better), including a good discussion on ESI vs. client-side composition with [Ilya Grigorik](https://twitter.com/igrigorik)
- [MDN: Functions / classes available to workers](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Functions_and_classes_available_to_workers)