
Analyze the win/loss of no longer combining assets with HTTP/2
Closed, Declined · Public



We have made the switch to SPDY, and when HTTP/2 is available in more browsers we will start using that too. One of the pros of HTTP/2 is that combining/concatenating assets is no longer needed (the browser can download many assets from the same domain at the same time). That opens the possibility for each small CSS/JS file to have an individual cache time and, in the best case, to be re-fetched only when it actually changes.
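The "re-fetch only when changed" idea is usually implemented with content-hashed URLs: the URL changes only when the file's bytes change, so each small file can be cached for a long time. A minimal sketch in Python (the `versioned_url` helper and the `/static/` layout are hypothetical illustrations, not ResourceLoader's actual scheme):

```python
import hashlib

def versioned_url(path: str, content: bytes) -> str:
    """Build a URL that changes only when the file's content changes,
    so the file can be cached with a far-future expiry and is
    re-fetched only after an actual edit."""
    digest = hashlib.sha1(content).hexdigest()[:8]
    return f"/static/{path}?v={digest}"

# Same content -> same URL (cache hit); edited content -> new URL.
print(versioned_url("startup.js", b"console.log('hi');"))
```

With a scheme like this, the HTML references the hashed URL, and only the files whose hashes changed miss the cache on the next visit.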


There's a lot that needs to be done and analyzed to make that happen, but as a first step, look at this example where we access the Wikipedia Facebook page, first with an empty cache and then with a cache pre-populated from the Main_Page.

Empty cache:

Screen Shot 2015-11-04 at 12.28.02 PM.png (226×2 px, 75 KB)

Pre-populated:

Screen Shot 2015-11-04 at 12.28.16 PM.png (230×2 px, 66 KB)

Already today we win in bytes and run less JavaScript; that's great. But we still make the same number of requests. Could the win be even larger if we split the assets into individual files?


  • If we stop concatenating assets, will we hurt performance for browsers using HTTP/1.1? How many users will be affected, and what will the effect be? Is there a way to minimize the loss?
  • What kind of positive effect will we see (speed, bytes, requests)?
  • Is the Main_Page test a good example, or should we pre-populate the cache with another page?
  • How much backend work is needed to change ResourceLoader, and what needs to be done to be able to cache assets longer?

Related research:

Event Timeline

Peter raised the priority of this task from to Needs Triage.
Peter updated the task description. (Show Details)
Peter added a subscriber: Peter.

This is interesting: combining assets gives a better compression ratio:
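The effect is easy to reproduce: compressing one concatenated file usually beats compressing each file separately, because the deflate dictionary can back-reference boilerplate shared across all of them. A rough sketch with synthetic data (the file contents are made up):

```python
import gzip

# Three small "modules" with a lot of shared boilerplate,
# as minified CSS/JS tends to have.
files = [
    b"(function(mw,$){mw.loader.using('module.a');}(mediaWiki,jQuery));",
    b"(function(mw,$){mw.loader.using('module.b');}(mediaWiki,jQuery));",
    b"(function(mw,$){mw.loader.using('module.c');}(mediaWiki,jQuery));",
]

# Compressing each file on its own pays per-file header overhead
# and cannot share a dictionary across files.
separate = sum(len(gzip.compress(f)) for f in files)

# One concatenated stream lets later files reference earlier ones.
combined = len(gzip.compress(b"\n".join(files)))

print(separate, combined)  # combined should come out smaller
```

This is the trade-off HTTP/2 unconcatenation has to pay for: more cache granularity, but worse compression per byte.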

But we still make the same amount of requests.

Total requests: 107 KB

That seems odd.

  • Maybe this includes data URIs?
  • Maybe this includes local cache hits that did not touch the network (e.g. not a networked HTTP 304, but the local cache)?
  • Maybe this includes the local body content for 304 responses? (The 304 round trip may be included, but the body should not be, because it didn't really go over the network; it came from the cache.)
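One way to check what a tool counts is to sum transfer sizes straight from a HAR export: data: URIs and local cache hits should contribute no network bytes. A sketch against a hand-made HAR-like dict (the entries are invented; `_transferSize` is Chrome's HAR extension field, and other tools may report it differently):

```python
def network_bytes(har: dict) -> tuple[int, int]:
    """Sum bytes that really crossed the wire, treating local cache
    hits (transfer size <= 0) and data: URIs as free."""
    total = 0
    networked = 0
    for entry in har["log"]["entries"]:
        if entry["request"]["url"].startswith("data:"):
            continue  # inlined, never hits the network
        size = entry["response"].get("_transferSize", 0)
        if size > 0:
            total += size
            networked += 1
    return total, networked

har = {"log": {"entries": [
    {"request": {"url": "https://example.org/a.js"},
     "response": {"_transferSize": 1200}},   # real download
    {"request": {"url": "https://example.org/b.css"},
     "response": {"_transferSize": 0}},      # served from local cache
    {"request": {"url": "data:image/png;base64,AAAA"},
     "response": {"_transferSize": 40}},     # data URI, no network
]}}

print(network_bytes(har))  # (1200, 1)
```

Comparing a sum like this against what the tool's summary table reports would show which of the three explanations above applies.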

Open Chrome Incognito and view

JavaScript: 185 KB transferred
CSS:        17 KB transferred
Image:      357 KB transferred
HTML:       18 KB
Total:      577 KB transferred
Total requests: 36

Open Chrome Incognito and view then

JavaScript: 0 KB transferred
CSS:        12 KB transferred
Image:      300 KB transferred
HTML:       18 KB
Total:      330 KB transferred
Total requests: 26

2 JavaScript resources:

  • startup (from cache)
  • jquery|mediawiki (from cache)
  • Other modules from localStorage.

2 CSS resources:

  • top queue (200 OK; 12 KB, not cached because the module queue differs for the Facebook page)
  • site (from cache)

Hmm, yes, let me dig into the waterfalls and see what's wrong.

On WebPageTest we do it the other way around, but it shouldn't matter; it looks like this:

It reports 200 on the JavaScript URLs; let me create an issue on GitHub.

Krinkle triaged this task as High priority.Dec 14 2015, 8:02 PM
Krinkle moved this task from Inbox to Backlog on the MediaWiki-ResourceLoader board.

OK, I spent some time on this. I got some help from Pat and tested the combineSteps function in WebPageTest, where you can run multiple navigations and get the data in the same waterfall:
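For reference, a WebPageTest script using combineSteps looks roughly like this (combineSteps merges all steps into one result and one waterfall; the URLs below are a sketch of the Main_Page-then-Facebook test described above, not the exact script used):

```
combineSteps
navigate    https://en.wikipedia.org/wiki/Main_Page
navigate    https://en.wikipedia.org/wiki/Facebook
```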


Here we actually get requests for JS in the second step:

When I test it locally in Chrome I get the same thing (though I thought I had verified the issue before I filed an issue on GitHub).

There's been a fair amount of broader understanding of HTTP/2's downsides in the tech industry over the past months. Priority handling in upstream Nginx/Chrome, for example, has improved since then, and general awareness of these issues has increased. Beyond that, there's not much we can do about it, since we don't plan on switching back either way.