
[EPIC] Monitor complex pages rendering time on mobile web
Closed, Resolved · Public

Description

Both MobileFrontend and the Minerva skin do complex DOM processing before the page is presented to the user. This includes, but is not limited to, the following:

  • MobileFrontend transformations
  • Section parsing
  • Minerva PageIssues parsing

Some time ago we observed T220751: [Bug] Extreme latency due to JavaScript parsing on DOM-heavy pages, which could easily have been caught by automated systems. Such systems can readily detect big changes in render time (like rendering time doubling), and we should aim for that.

Open questions

We develop features gradually, adding bits one by one. It's possible that each commit only slightly increases render time, yet within a one-month window the render time doubles. To avoid the boiling-frog problem, our tools should provide a graph that can show a steady increase in render time, or a tool that compares today's render time with the render time saved about a month ago (a rough sketch follows after these questions).

How to monitor such things? Possible tools:

How to define "a complex page"?
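
On the comparison idea above: as an illustration only (not an existing tool), a small script could query Graphite's standard /render API for two equal-length windows and flag a regression when the recent median drifts too far from the month-old one. The metric name and the 25% threshold below are hypothetical placeholders.

```ts
// compare-render-time.ts — sketch only (Node 18+ for global fetch).
// The metric name and the 25% threshold are hypothetical placeholders.
const GRAPHITE = 'https://graphite.wikimedia.org/render';
const METRIC = 'frontend.navtiming.firstPaint.mobile.median'; // hypothetical

async function medianOf(from: string, until: string): Promise<number> {
  const url = `${GRAPHITE}?target=${METRIC}&from=${from}&until=${until}&format=json`;
  // Graphite's JSON format: [{ target, datapoints: [[value, timestamp], ...] }]
  const series = await (await fetch(url)).json();
  const values = series[0].datapoints
    .map(([value]: [number | null, number]) => value)
    .filter((v: number | null): v is number => v !== null)
    .sort((a: number, b: number) => a - b);
  return values[Math.floor(values.length / 2)];
}

async function main(): Promise<void> {
  const recent = await medianOf('-7d', 'now');
  const baseline = await medianOf('-37d', '-30d'); // same-length window ~a month back
  console.log(`recent=${recent} baseline=${baseline} ratio=${(recent / baseline).toFixed(2)}`);
  if (recent / baseline > 1.25) {
    console.error('Render time regressed by more than 25% versus last month');
    process.exitCode = 1;
  }
}

main();
```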

Acceptance criteria
  • There is a tool/graph that provides us information about changes in rendering time of complex pages

Event Timeline

pmiazga renamed this task from Monitor complex pages rendering time to Monitor complex pages rendering time on mobile web.Apr 30 2019, 11:51 AM
Jdlrobson subscribed.

It's marked "scratch" (not sure what that means), but we have https://grafana.wikimedia.org/d/000000205/mobile-2g, which measures back-end response time (which includes MobileFormatter).

Measuring page issues/section parsing sounds a little different, and I'm not sure how to do that other than to look at extremes when measuring first interactive.

To me, defining complex pages seems like looking at the extremes of all these values?

See also T204606, which talks about the PHP transformations and a few proposals on how we can avoid these extremes.

Jdlrobson renamed this task from Monitor complex pages rendering time on mobile web to [EPIC] Monitor complex pages rendering time on mobile web.May 1 2019, 6:15 PM
Jdlrobson moved this task from Incoming to Epics/Goals on the Web-Team-Backlog board.

We have CPU time metrics for the URLs we test today (like time spent parsing HTML, styles, etc.); check out https://grafana.wikimedia.org/d/000000059/webpagereplay-drilldown?orgId=1&var-base=sitespeed_io&var-path=emulatedMobileReplay&var-group=en_m_wikipedia_org&var-page=_wiki_Barack_Obama&var-browser=chrome&var-connectivity=100&var-function=median and scroll down to the CPU section. That data is collected on an AWS server, where we slow down the CPU to look more like a mobile phone. It's not perfect, but it will work until we can use real phones.
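
For anyone wanting to reproduce that kind of CPU slowdown locally, here is a minimal sketch using Puppeteer's CPU throttling; the 4x factor and the test URL are arbitrary choices for illustration, not what the synthetic tests actually use.

```ts
// throttle-and-time.ts — sketch only; the 4x factor and URL are arbitrary.
import puppeteer from 'puppeteer';

async function main(): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Slow the CPU down 4x to roughly approximate a mid-range phone.
  await page.emulateCPUThrottling(4);

  await page.goto('https://en.m.wikipedia.org/wiki/Barack_Obama', {
    waitUntil: 'load',
  });

  // Read Navigation Timing from the page to see where the time went.
  const timing = await page.evaluate(() => {
    const nav = performance.getEntriesByType('navigation')[0] as PerformanceNavigationTiming;
    return {
      domContentLoadedMs: nav.domContentLoadedEventEnd,
      loadMs: nav.loadEventEnd,
    };
  });
  console.log(timing);

  await browser.close();
}

main();
```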

One problem today, I think, is that we only test a couple of pages for each wiki. For catching things like T220751 there are two approaches. First, we need to collect CPU long tasks from Chrome in our RUM data. We can do that once Chrome 81 is released, since it will then be easy for us to enable. That way we can get long CPU tasks from real users. However, that data is hard to understand, since we miss out on what is causing the long task.
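
A minimal sketch of what that browser-side collection could look like, using the Long Tasks API (the beacon endpoint below is hypothetical; the `buffered` flag is the piece that Chrome 81 makes usable for `longtask` entries):

```ts
// Sketch only: report Long Tasks (main thread blocked for >50ms) to a
// hypothetical beacon endpoint. Note the entries say little about the
// *cause* — only when the main thread was blocked and for how long.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    navigator.sendBeacon(
      '/beacon/longtasks', // hypothetical endpoint
      JSON.stringify({
        durationMs: entry.duration,
        startTimeMs: entry.startTime,
        page: location.pathname,
      })
    );
  }
});
// buffered: true also delivers long tasks that fired before this observer
// was registered — the capability referenced above for Chrome 81.
observer.observe({ type: 'longtask', buffered: true });
```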

The other is to add more synthetic testing where we crawl around Wikipedia and test different pages, as in T235817. That way we can find pages with problems and also record much more data (like a DevTools log), so it's easier for us to understand what goes wrong and act on it.
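
As a sketch of that crawl-and-record idea (again with Puppeteer; the seed URL, link selector, and five-page budget are arbitrary), each visited page gets its own DevTools trace file that can be inspected after the fact:

```ts
// crawl-and-trace.ts — sketch only; seed URL, selector and limits are arbitrary.
import puppeteer from 'puppeteer';

async function main(): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Start from one article and collect a handful of same-wiki links.
  await page.goto('https://en.m.wikipedia.org/wiki/Barack_Obama');
  const links: string[] = await page.evaluate(() =>
    Array.from(document.querySelectorAll<HTMLAnchorElement>('a[href^="/wiki/"]'))
      .slice(0, 5)
      .map((a) => a.href)
  );

  // Visit each link with a DevTools trace, so a slow page comes with the
  // data needed to understand what went wrong.
  for (const [i, url] of links.entries()) {
    await page.tracing.start({ path: `trace-${i}.json` });
    await page.goto(url, { waitUntil: 'load' });
    await page.tracing.stop();
  }

  await browser.close();
}

main();
```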

Unless the Performance Team wants to evolve this into something else, I think we should resolve this based on https://phabricator.wikimedia.org/T222163#5816108.

Seems out of scope for web...

Jdlrobson claimed this task.

per above