
Reconsider use of RESTBase k-r-v storage for mobileapps
Closed, ResolvedPublic

Description

The mobileapps service is currently using RESTBase's key-rev-value storage to persist materialized representations of content, re-formatted for use in native applications. I suspect that this is at least in part simply because It Was There. However, given the problematic nature of this storage, I think we should re-examine this decision.

The raison d'être of RESTBase's k-r-v data model and retention was the storage of materialized representations of content in a format suitable for Visual Editor. This materialized representation takes a significant amount of time to generate, and thus must be pre-generated and stored in advance. This is done for every render, of every revision, of every document, across all projects, despite us only ever accessing a tiny subset. Additionally, retention semantics dictate that past records can only be removed after the expiration of a TTL, with a clock that starts only once the record has been superseded. These semantics are necessary to support concurrent editing. Implementing them came with a number of trade-offs, not least of which is significant storage amplification.
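The retention rule described above can be sketched as follows. This is a minimal illustration of the semantics, not RESTBase's actual implementation; the TTL value and type names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

TTL_SECONDS = 86_400  # hypothetical retention TTL; the real configured value may differ

@dataclass
class StoredRender:
    key: str
    revision: int
    superseded_at: Optional[float] = None  # set when a newer render lands

def is_removable(render: StoredRender, now: float) -> bool:
    """A past render becomes removable only once it has been superseded
    AND the TTL, measured from the moment of supersession, has elapsed.
    The current render (superseded_at is None) is never removed."""
    if render.superseded_at is None:
        return False
    return now - render.superseded_at > TTL_SECONDS
```

The key property is that the TTL clock starts at supersession time, not at write time, which is what lets in-flight edits that started against an older render still resolve against it.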

The mobileapps service doesn't seem to require the same elaborate retention needed for editing, and transform generation is on the order of 4 seconds at p99 (as opposed to minutes in the Parsoid case). If transform latency were low enough, and/or cache hit rates high enough, then perhaps we should rely on Varnish as we do with other content. And given the expense of pre-generating and storing mobile content for every change, regardless of how little of it is actually accessed, I suspect we could easily justify significant engineering effort in optimizing transform latency.

Event Timeline

Eevans created this task.Aug 2 2018, 9:22 PM
Restricted Application added a subscriber: Aklapper.Aug 2 2018, 9:22 PM
Eevans triaged this task as Medium priority.Aug 2 2018, 9:22 PM

One other reason for using the k-r-v pattern is that mobile content consists of a significant number of chunks: previously those were mobile-sections-lead and mobile-sections-remaining; now with PCS it's mobile-html, metadata, media, references, etc. Since they are all fetched lazily at different times, in a perfect world the user would expect to get the version of references that corresponds exactly to the render of the content they've been reading, so we also need the grace period for older renders here.

However, in practice, matching renders by requesting the non-primary content with a specific TID was never implemented in the apps, and they, in general, don't seem to care about it. So, given the amount of processing power we throw at this, it might be just 'good enough' to generate everything on the fly.

Some of the new PCS endpoints are quite slow, so the storage might still be needed for those, but even moving away from the k-r-v pattern and treating Cassandra as a simple key-value store could provide significant savings on the storage side. We would still be wasting significant resources on the MCS side, though, so making MCS much faster indeed seems like a good investment.

Joe added a subscriber: Joe.Aug 3 2018, 2:47 PM

I have a few comments on this topic. Specifically:

  • If you define your storage as a way to store a materialized representation, however expensive, what you really need is a cache, not a storage system.
  • In general, HTTP response caching should be handled by the edge network (Varnish/ATS), while application-level caching should be handled by the application itself and not by any upper layer.
  • Also in the context of caching, object-level caching is better handled by the application itself, and would probably be much more efficient, making individual responses less expensive.
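The last point, object-level caching inside the application, can be illustrated with a minimal sketch (purely hypothetical; MCS is not written in Python and these function names are invented): the idea is to cache the expensive intermediate object once, so each endpoint's response becomes cheap to derive from it.

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def parsed_document(title: str, revision: int) -> dict:
    # Stand-in for the expensive step (e.g. fetching and parsing Parsoid HTML).
    return {"title": title, "revision": revision, "sections": ["lead", "s1", "s2"]}

def mobile_sections_lead(title: str, revision: int) -> dict:
    doc = parsed_document(title, revision)  # first call populates the cache
    return {"title": doc["title"], "lead": doc["sections"][0]}

def mobile_sections_remaining(title: str, revision: int) -> dict:
    doc = parsed_document(title, revision)  # subsequent calls hit the cache
    return {"title": doc["title"], "remaining": doc["sections"][1:]}
```

With this shape, serving the lead and remaining sections for the same (title, revision) pays the expensive cost only once, instead of storing every derived response externally.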

In any case, this assessment should start with gathering the following data:

  • What cache hit ratio we have at the varnish layer for MCS-related entities
  • What cache hit ratio we have at the restbase layer for MCS-related entities

and, if at all possible, disabling any caching in RESTBase for this application and others in the same situation.

Eevans added a comment.Aug 3 2018, 2:52 PM

[ ... ]

  • What cache hit ratio we have at the restbase layer for MCS-related entities

Assuming I understand you correctly, the answer is essentially 100% here, because we pre-generate everything.

Joe added a comment.Aug 3 2018, 3:01 PM

[ ... ]

  • What cache hit ratio we have at the restbase layer for MCS-related entities

Assuming I understand you correctly, the answer is essentially 100% here, because we pre-generate everything.

Right, that model is clearly not tenable in the long run, either. The next question, then, is what percentage of the objects stored in RESTBase ever get re-read before they expire or are regenerated.

Eevans added a comment.Aug 3 2018, 3:33 PM

If I am reading https://grafana.wikimedia.org/dashboard/db/mobileapps correctly, then it's even worse than I originally suspected. Only page_summary_-title- is in the black (at just 4:1).

I would second the idea of switching the MCS' storage to key-value, at least in the short term, in this way reducing the storage capacity needs.

If I am reading https://grafana.wikimedia.org/dashboard/db/mobileapps correctly, then it's even worse than I originally suspected. Only page_summary_-title- is in the black (at just 4:1).

Both summary and mobile-sections have high p99 latencies because they both take the full Parsoid HTML as a starting point for their transformations. A case for pre-generating and storing all of MCS content in the long run could also be made when taking into account optimisations that could be done to MCS. For example, if it were possible to compile a page's summary out of the generated mobile-html or an intermediate, stripped-down version of the HTML, then p99 latencies would drop significantly overall, but that would solidify the need to store all of the content.

bearND added a subscriber: bearND.Aug 4 2018, 12:26 AM

Right, that model is clearly not tenable in the long run, either. The next question, then, is what percentage of the objects stored in RESTBase ever get re-read before they expire or are regenerated.

Here's my back-of-the-napkin math, which probably has little to do with reality, because the numbers might vary greatly depending on the actual distribution of articles being read vs. edited.

CFStats for all the stored articles:

  • commons_T_mobile__ng_lead
    • Read Count: 637
    • Write Count: 39524
    • Number of partitions (estimate): 128041
  • others_T_mobile__ng_lead
    • Read Count: 1214
    • Write Count: 204044
    • Number of partitions (estimate): 201642
  • wikipedia_T_mobile__ng_lead
    • Read Count: 1831380
    • Write Count: 9565501
    • Number of partitions (estimate): 9334424
  • enwiki_T_mobile__ng_lead
    • Read Count: 1195555
    • Write Count: 3795435
    • Number of partitions (estimate): 2772486

So, it's pretty obvious that for 'commons' and 'others' we are clearly just melting the ice caps. Actually, the fact that they are being re-rendered is a bug: normal edit-related re-rendering is only for Wikipedia, so here I guess we're reacting to null edits and MediaWiki purges. This needs to be fixed, and the mobile-sections endpoints have to be removed for everything other than Wikipedia.

As for Wikipedias, the total read count is roughly 3 million vs. a total write count of 13 million. Those are CFStats numbers, so they cover the period since the startup of the particular node; the absolute numbers are fairly useless, but the ratio between reads and writes is interesting. The estimated number of partitions is about 12 million, and if we assume that writes and reads are evenly distributed across all titles (a WILDLY IMPRECISE ASSUMPTION), then 3 million out of 13 million stored renders are actually read, so the "hit ratio" is about 23%.
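For reference, that estimate can be reproduced directly from the CFStats figures listed earlier (under the same wildly imprecise uniformity assumption):

```python
# Read/write counts from the CFStats listing above (wikipedia + enwiki tables).
reads = 1_831_380 + 1_195_555    # ~3 million
writes = 9_565_501 + 3_795_435   # ~13.4 million

# Assuming reads and writes are evenly distributed across titles,
# this approximates the fraction of stored renders that are ever read back.
hit_ratio = reads / writes
print(f"{reads} reads / {writes} writes = {hit_ratio:.0%}")
# prints "3026935 reads / 13360936 writes = 23%"
```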

A little bit more data: according to webrequest logs, mobile-sections-lead was requested 4,808,026 times on 2018/08/03. According to RESTBase graphs, on the same day the average rate of requests reaching RESTBase was 15/s, which gives us 1,296,000 Varnish cache misses, i.e. a hit ratio of 0.73.

Given that the p95 latency for MCS generating mobile-sections from scratch is between 500 ms and 1 second, only 5% of the requests not served by Varnish will result in noticeable client-side latency. That's roughly 64,800 requests per day, giving us an SLA where about 98% of overall requests are served well within 1 second. Given that for mobile clients the network delay is probably a far bigger driver of overall page-load latency, I believe it might be a fair trade-off to stop pre-generating mobile content.
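For what it's worth, the arithmetic above checks out as follows (the 15/s rate and the 5% slow-request fraction are the approximations stated in the comment):

```python
daily_requests = 4_808_026  # webrequest log count for mobile-sections-lead, 2018/08/03
restbase_rate = 15          # average req/s reaching RESTBase the same day

varnish_misses = restbase_rate * 86_400            # 1,296,000 misses/day
varnish_hit_ratio = 1 - varnish_misses / daily_requests

slow_requests = varnish_misses * 0.05              # ~5% of misses exceed ~1s generation
within_1s = 1 - slow_requests / daily_requests

print(f"hit ratio {varnish_hit_ratio:.2f}, "
      f"{slow_requests:.0f} slow requests/day, "
      f"{within_1s:.1%} within 1s")
```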

Granted, these calculations are true for mobile-sections, which will likely be replaced with mobile-html very soon, and the latency calculations for the new endpoint will probably be different.

So, it's pretty obvious that for 'commons' and 'others' we are clearly just melting the ice caps. Actually, the fact that they are being re-rendered is a bug: normal edit-related re-rendering is only for Wikipedia, so here I guess we're reacting to null edits and MediaWiki purges. This needs to be fixed, and the mobile-sections endpoints have to be removed for everything other than Wikipedia.

I agree. We should stop exposing mobile-sections for everything other than WP and stop rendering them from CP.

As for Wikipedias, the total read count is roughly 3 million vs. a total write count of 13 million. Those are CFStats numbers, so they cover the period since the startup of the particular node; the absolute numbers are fairly useless, but the ratio between reads and writes is interesting. The estimated number of partitions is about 12 million, and if we assume that writes and reads are evenly distributed across all titles (a WILDLY IMPRECISE ASSUMPTION), then 3 million out of 13 million stored renders are actually read, so the "hit ratio" is about 23%.

Well, that is probably far from the truth. If you take into account the fact that writes are driven by template updates and reads are not, we have two very distinct sets which don't necessarily overlap too much.

A little bit more data: according to webrequest logs, mobile-sections-lead was requested 4,808,026 times on 2018/08/03. According to RESTBase graphs, on the same day the average rate of requests reaching RESTBase was 15/s, which gives us 1,296,000 Varnish cache misses, i.e. a hit ratio of 0.73.

I think this data point works in favour of keeping the pre-generation going, since RESTBase obviously alleviates much of the load from MCS and allows for a quick delivery of content to clients (which is RB's base benefit).

Given that the p95 latency for MCS generating mobile-sections from scratch is between 500 ms and 1 second, only 5% of the requests not served by Varnish will result in noticeable client-side latency. That's roughly 64,800 requests per day, giving us an SLA where about 98% of overall requests are served well within 1 second. Given that for mobile clients the network delay is probably a far bigger driver of overall page-load latency, I believe it might be a fair trade-off to stop pre-generating mobile content.

The exact impact of the latency stemming from content generation is really unknown, given that mobile connections vary significantly throughout the world; in some cases it is noticeable, in some it is not. While a 98% SLA might sound good on paper, we have to keep two things in mind:

  • the ratio of mobile vs. desktop usage is constantly growing, so what today constitutes 64k requests can easily become several hundred thousand requests in the near future
  • specifically for mobile-sections (but not for mobile-html, AFAIK), if the lead section takes over 1s to render, the remaining sections will take the same amount of time, meaning that the same user will experience slow load times twice per article
Eevans added a comment.EditedAug 7 2018, 4:05 PM

[ ... ]

  • the ratio of mobile vs. desktop usage is constantly growing, so what today constitutes 64k requests can easily become several hundred thousand requests in the near future

Is this for the native apps, or for the mobile web version (i.e. are they growing at the same rate)? Which are the right dashboards for this?

Is this for the native apps, or for the mobile web version (i.e. are they growing at the same rate)? Which are the right dashboards for this?

Mobile web is the big driver, of course, but given that the long-term plan is to serve all mobile clients with the same HTML endpoint, that is of less relevance. FTR, mobile apps usage is also growing, but more slowly.

Eevans added a comment.EditedAug 7 2018, 7:15 PM

Here is an attempt at summarizing the discussion so far (please chime in if any of this is wrong):

  • We may, or may not, require matching sections (the reason for using k-r-v)
  • MCS p95 latency is 1s
    • PCS latency may be higher (slated to eventually replace MCS)
  • We pre-generate and store everything, but read less than a quarter of it (based on the Cassandra read/write ratio)
  • Varnish hit rate is ~73%

Suggestions thus far:

  • Do not pre-generate (and store), rely on Varnish for caching
    • ...and optimize generation to lower cache miss latencies
  • Pre-generate and store selectively, rely solely on Varnish for caching otherwise
    • ...and optimize generation to lower cache miss latencies
  • Use a different RESTBase/Cassandra storage strategy
  • Have MCS manage its own object cache
Eevans added a comment.Aug 7 2018, 7:19 PM

A little bit more data: according to webrequest logs, mobile-sections-lead was requested 4,808,026 times on 2018/08/03. According to RESTBase graphs, on the same day the average rate of requests reaching RESTBase was 15/s, which gives us 1,296,000 Varnish cache misses, i.e. a hit ratio of 0.73.

I think this data point works in favour of keeping the pre-generation going, since RESTBase obviously alleviates much of the load from MCS and allows for a quick delivery of content to clients (which is RB's base benefit).

Just to be clear, if I understand this correctly, we're saying that Varnish has a hit rate of 73%. That almost read to me the first time as the other way around.

  • Have MCS manage its own object cache

I think this is the best long-term solution, but for now IMHO what we should do is a combination of:

  • Pre-generate and store selectively, rely solely on Varnish for caching otherwise
    • ...and optimize generation to lower cache miss latencies
  • Use a different RESTBase/Cassandra storage strategy

... where selective storing in my mind means store and pre-generate only for WP and drop everything else. The question is: in light of mobile-html, can we do that? I.e., is mobile-html optimised for WP only, just as mobile-sections are?

Just to be clear, if I understand this correctly, we're saying that Varnish has a hit rate of 73%. That almost read to me the first time as the other way around.

Duh, thank you for pointing out the obvious. Somehow I originally read Petr's comment the other way round - that 73% of requests reach RESTBase, but now I realise that's not the case.

Duh, thank you for pointing out the obvious. Somehow I originally read Petr's comment the other way round - that 73% of requests reach RESTBase, but now I realise that's not the case.

Oops, my bad :)

I.e., is mobile-html optimised for WP only, just as mobile-sections are?

According to @bearND, mobile-html is a direct replacement for mobile-sections, so it was created for the apps. Although there are long-term plans to use it for mobile web, this is not going to happen within this fiscal year. When those plans materialize a bit more, we can reconsider and order hardware accordingly.

To summarize, one thing everyone agrees upon right now is to remove pre-generation and mobile-sections endpoints from everything except wikipedias - I will do that as a start.

According to @bearND, mobile-html is a direct replacement for mobile-sections, so it was created for the apps. Although there are long-term plans to use it for mobile web, this is not going to happen within this fiscal year. When those plans materialize a bit more, we can reconsider and order hardware accordingly.

Agreed. Let's go with what we have for this year at least and revisit when/if needed.

To summarize, one thing everyone agrees upon right now is to remove pre-generation and mobile-sections endpoints from everything except wikipedias - I will do that as a start.

Great, thank you! As soon as the end points are gone, we can also truncate the corresponding tables.

Change 451375 had a related patch set uploaded (by Ppchelko; owner: Ppchelko):
[mediawiki/services/change-propagation/deploy@master] Only rerender mobile-sections for wikipedia.

https://gerrit.wikimedia.org/r/451375

Change 451375 merged by Mobrovac:
[mediawiki/services/change-propagation/deploy@master] Only rerender mobile-sections for wikipedia.

https://gerrit.wikimedia.org/r/451375

Mentioned in SAL (#wikimedia-operations) [2018-08-08T18:22:25Z] <ppchelko@deploy1001> Started deploy [changeprop/deploy@f0246f7]: Only rerender mobile-sections for wikipedia T201103

Mentioned in SAL (#wikimedia-operations) [2018-08-08T18:23:51Z] <ppchelko@deploy1001> Finished deploy [changeprop/deploy@f0246f7]: Only rerender mobile-sections for wikipedia T201103 (duration: 01m 29s)

Change 451523 had a related patch set uploaded (by Ppchelko; owner: Ppchelko):
[mediawiki/services/restbase/deploy@master] Remove mobile endpoints from non-wikipedia projects.

https://gerrit.wikimedia.org/r/451523

Change 451523 merged by Mobrovac:
[mediawiki/services/restbase/deploy@master] Remove mobile endpoints from non-wikipedia projects.

https://gerrit.wikimedia.org/r/451523

Mentioned in SAL (#wikimedia-operations) [2018-08-09T09:50:42Z] <mobrovac@deploy1001> Started deploy [restbase/deploy@cb6b4b4]: Drop mobile-sections, feed and media end points from non-WPs - T201103

Mentioned in SAL (#wikimedia-operations) [2018-08-09T09:56:55Z] <mobrovac@deploy1001> deploy aborted: Drop mobile-sections, feed and media end points from non-WPs - T201103 (duration: 06m 14s)

Mentioned in SAL (#wikimedia-operations) [2018-08-09T09:57:44Z] <mobrovac@deploy1001> Started deploy [restbase/deploy@cb6b4b4]: Drop mobile-sections, feed and media end points from non-WPs - T201103

Mentioned in SAL (#wikimedia-operations) [2018-08-09T10:02:10Z] <mobrovac@deploy1001> Finished deploy [restbase/deploy@cb6b4b4]: Drop mobile-sections, feed and media end points from non-WPs - T201103 (duration: 04m 26s)

Change 451604 had a related patch set uploaded (by Mobrovac; owner: Mobrovac):
[mediawiki/services/restbase/deploy@master] Config: Fix wp.org project key name

https://gerrit.wikimedia.org/r/451604

Change 451604 merged by Mobrovac:
[mediawiki/services/restbase/deploy@master] Config: Fix wp.org project key name

https://gerrit.wikimedia.org/r/451604

Mentioned in SAL (#wikimedia-operations) [2018-08-09T10:05:56Z] <mobrovac@deploy1001> Started deploy [restbase/deploy@ece750a]: Drop mobile-sections, feed and media end points from non-WPs, take #2 - T201103

Mentioned in SAL (#wikimedia-operations) [2018-08-09T10:14:34Z] <mobrovac@deploy1001> Finished deploy [restbase/deploy@ece750a]: Drop mobile-sections, feed and media end points from non-WPs, take #2 - T201103 (duration: 08m 38s)

Mentioned in SAL (#wikimedia-operations) [2018-08-09T10:14:46Z] <mobrovac@deploy1001> Started deploy [restbase/deploy@ece750a]: Drop mobile-sections, feed and media end points from non-WPs, take #3 - T201103

Mentioned in SAL (#wikimedia-operations) [2018-08-09T10:19:06Z] <mobrovac@deploy1001> Finished deploy [restbase/deploy@ece750a]: Drop mobile-sections, feed and media end points from non-WPs, take #3 - T201103 (duration: 04m 20s)

Mentioned in SAL (#wikimedia-operations) [2018-08-09T10:19:13Z] <mobrovac@deploy1001> Started deploy [restbase/deploy@ece750a]: Drop mobile-sections, feed and media end points from non-WPs, take #4 - T201103

Mentioned in SAL (#wikimedia-operations) [2018-08-09T10:31:23Z] <mobrovac@deploy1001> Finished deploy [restbase/deploy@ece750a]: Drop mobile-sections, feed and media end points from non-WPs, take #4 - T201103 (duration: 12m 11s)

Mentioned in SAL (#wikimedia-operations) [2018-08-09T10:31:34Z] <mobrovac@deploy1001> Started deploy [restbase/deploy@ece750a]: Drop mobile-sections, feed and media end points from non-WPs, take #5 - T201103

Mentioned in SAL (#wikimedia-operations) [2018-08-09T10:38:02Z] <mobrovac@deploy1001> Finished deploy [restbase/deploy@ece750a]: Drop mobile-sections, feed and media end points from non-WPs, take #5 - T201103 (duration: 06m 27s)

Mentioned in SAL (#wikimedia-operations) [2018-08-09T10:57:55Z] <mobrovac> truncating commons and others mobile tables - T201103

Current status:

  • pre-generation for mobile end points is enabled only for WPs
  • the mobile end points have been removed from the public API for non-WP projects
  • the data tables for the others and commons storage groups have been truncated [1]

The question now is: should we simplify the storage further and move away from the k-r-v multi-content bucket for mobile sections for WPs? I would say yes; IMHO a simpler solution is better in this case anyway.

[1] For posterity, here's the list of keyspaces that got their data tables truncated:

commons_T_mobile__ng3HeqOXXmkYfPizz4RPUR4OLXLds
commons_T_mobile__ng_lead
commons_T_mobile__ngR6XB1sh6_FFo_mfX4oZA56vpD_w
commons_T_mobile__ng_remaining
others_T_mobile__ng3HeqOXXmkYfPizz4RPUR4OLXLds
others_T_mobile__ng_lead
others_T_mobile__ngR6XB1sh6_FFo_mfX4oZA56vpD_w
others_T_mobile__ng_remaining

We can probably drop them, too.

bearND added a comment.Aug 9 2018, 3:42 PM

Current status:

  • pre-generation for mobile end points is enabled only for WPs

When you say mobile endpoints you mean the mobile-sections* endpoints, right? I hope that others, like the definitions endpoint on Wiktionary, still stay the same.

When you say mobile endpoints you mean the mobile-sections* endpoints, right? I hope that others, like the definitions endpoint on Wiktionary, still stay the same.

mobile-sections, media and feed

bearND added a comment.Aug 9 2018, 5:08 PM

I thought we were not storing media and feed before anyways, as was mentioned in the Platform/Audiences sync a few minutes ago.

I thought we were not storing media and feed before anyways, as was mentioned in the Platform/Audiences sync a few minutes ago.

Ah yes, sorry, they were removed from the public API of non-WP projects, but only mobile-sections was being stored/pre-generated.

bearND added a comment.Aug 9 2018, 5:27 PM

Makes sense now. Thanks!

Pchelolo closed this task as Resolved.Jul 10 2019, 7:23 PM
Pchelolo claimed this task.

This was done a while ago.