
Analyze the win/loss of stop combining assets with HTTP/2
Closed, Declined · Public



We have made the switch to SPDY, and when HTTP/2 is available in more browsers we will start using that too. One of the pros of HTTP/2 is that combining/concatenating assets is no longer needed (the browser can download many assets from the same domain at the same time). That opens up the possibility of giving every small CSS/JS file an individual cache time and, in the best case, only re-fetching a file when it has actually changed.
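The "only re-fetch when changed" idea is typically implemented with content-hashed filenames: each file is served under a name derived from its bytes, so it can carry a very long max-age, and a new URL only appears when the content changes. A minimal sketch of the naming scheme (the function name and 8-character digest are illustrative, not ResourceLoader's actual scheme):

```python
import hashlib

def hashed_name(path: str, content: bytes) -> str:
    """Return a cache-busting filename, e.g. 'site.1a2b3c4d.css'.

    Because the name changes whenever the bytes change, the file can be
    served with 'Cache-Control: max-age=31536000, immutable' and clients
    only re-download it after an actual edit.
    """
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, dot, ext = path.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{path}.{digest}"
```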


There's a lot that needs to be done and analyzed to make that happen, but to start, let's look at this example where we access the Wikipedia Facebook page, first with an empty cache and then with the cache pre-populated from the Main_Page.

Empty cache:

Pre-populated:

Already today we win in bytes transferred and in less JavaScript, which is great. But we still make the same number of requests. Could the win be even bigger if we split the assets into individual files?


  • By no longer concatenating assets, will we hurt performance for browsers using HTTP/1.1? How many users will be affected, and what will the effect be? Is there a way to minimize the loss?
  • What kind of positive effect will we see (speed, bytes, requests)?
  • Is the Main_Page test a good example, or should we pre-populate the cache with another page?
  • How much work is it on the backend to change ResourceLoader, and what needs to be done to be able to cache assets longer?

Related research:

Event Timeline

Peter created this task.Nov 4 2015, 11:53 AM
Peter raised the priority of this task from to Needs Triage.
Peter updated the task description. (Show Details)
Peter added a subscriber: Peter.
Restricted Application added subscribers: StudiesWorld, Aklapper.Nov 4 2015, 11:53 AM
Krenair added a project: Performance-Team.
Krenair added a subscriber: Krenair.
Peter added a comment.Nov 25 2015, 8:09 AM

This is interesting: combining assets gives a better compression ratio:
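The effect is easy to reproduce: compressing each small file separately pays per-stream overhead and loses the shared dictionary across files, so the concatenated blob usually compresses to fewer total bytes. A rough sketch (the two sample "modules" are made up):

```python
import zlib

# Two hypothetical small modules that share a lot of vocabulary,
# as real CSS/JS files tend to do.
a = b"function init(){document.getElementById('content').focus();}" * 5
b = b"function done(){document.getElementById('content').remove();}" * 5

separate = len(zlib.compress(a)) + len(zlib.compress(b))
combined = len(zlib.compress(a + b))

# Compressing the concatenation lets the second module reuse the
# first one's dictionary, so the total comes out smaller.
print(separate, combined)
```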

Krinkle set Security to None.
Krinkle added a subscriber: Krinkle.Dec 4 2015, 6:34 PM

But we still make the same amount of requests.

Total requests: 107 KB

That seems odd.

  • Maybe this includes data URIs?
  • Maybe this includes local cache hits that did not touch the network (i.e. served from the local cache without even a 304 roundtrip)?
  • Maybe this includes the local body content of 304 responses? (The 304 roundtrip may be counted, but the body should not be, because it didn't actually go over the network; it came from the cache.)
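One way to check these hypotheses is to look at a HAR export of the page load rather than the summary numbers, counting data: URIs, cache hits, and 304 revalidations separately. A rough sketch against a hand-made HAR-like structure (the field names follow HAR 1.2; `_transferSize` is Chrome's extension field, and the example.org entries are placeholders):

```python
def classify(entries):
    """Split HAR entries into data URIs, cache hits, 304s, and real transfers."""
    data_uris, cache_hits, revalidated, network = [], [], [], []
    for e in entries:
        url = e["request"]["url"]
        size = e["response"].get("_transferSize", -1)
        if url.startswith("data:"):
            data_uris.append(e)                # never a network request
        elif e["response"]["status"] == 304:
            revalidated.append(e)              # roundtrip happened, body from cache
        elif size <= 0:
            cache_hits.append(e)               # served locally, nothing on the wire
        else:
            network.append(e)                  # bytes actually transferred
    return data_uris, cache_hits, revalidated, network

entries = [
    {"request": {"url": "https://example.org/a.js"},
     "response": {"status": 200, "_transferSize": 1024}},
    {"request": {"url": "https://example.org/b.css"},
     "response": {"status": 200, "_transferSize": 0}},
    {"request": {"url": "data:image/png;base64,iVBOR"},
     "response": {"status": 200}},
    {"request": {"url": "https://example.org/c.js"},
     "response": {"status": 304, "_transferSize": 200}},
]
data_uris, cache_hits, revalidated, network = classify(entries)
```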

Open Chrome Incognito and view

JavaScript: 185 KB transferred
CSS: 17 KB transferred
Image: 357 KB transferred
HTML: 18 KB
Total: 577 KB transferred
Total requests: 36

Open Chrome Incognito and view then

JavaScript: 0 KB transferred
CSS: 12 KB transferred
Image: 300 KB transferred
HTML: 18 KB
Total: 330 KB transferred
Total requests: 26

2 JavaScript resources:

  • startup (from cache)
  • jquery|mediawiki (from cache)
  • Other modules from localStorage.

2 CSS resources:

  • top queue (200 OK; 12 KB; not cached because the queue on the Facebook page differs from Main_Page's)
  • site (from cache)
Krinkle updated the task description. (Show Details)Dec 6 2015, 2:15 PM
Peter added a comment.Dec 7 2015, 6:19 AM

Hmm, yes, let me dig into the waterfalls and see what's wrong.

Peter added a comment.Dec 7 2015, 6:35 AM

On WPT we do it the other way around, but it shouldn't matter; it looks like this:

It reports 200 on the JavaScript URLs; let me create an issue on GitHub.

Krinkle triaged this task as High priority.Dec 14 2015, 8:02 PM
Krinkle moved this task from Inbox to Backlog on the MediaWiki-ResourceLoader board.
Peter added a comment.Dec 21 2015, 9:21 AM

OK, I spent some time on this. I got some help from Pat and tested the combineSteps function in WebPageTest, which lets you run multiple navigations and get the data in the same waterfall:
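For reference, a WebPageTest script along these lines would look roughly like this (the two page paths are taken from the test described above; treat the exact script as a sketch):

```
combineSteps
navigate    https://en.wikipedia.org/wiki/Main_Page
navigate    https://en.wikipedia.org/wiki/Facebook
```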


Here we actually get a request for JS in the second step:

When I test it locally in Chrome I get the same thing (though I thought I had verified the issue before I filed one on GitHub).

jayvdb added a subscriber: jayvdb.Apr 21 2016, 6:50 PM
Krinkle closed this task as Declined.Dec 6 2016, 12:19 AM

Over the past months, the tech industry has gained a broader understanding of HTTP/2's downsides. Priority handling in upstream Nginx and Chrome, for example, has improved since then, and general awareness of these issues has increased. Beyond that there's not much we can do about it, since we don't plan on switching back either way.