
Increase Memory Limit for Scribunto
Closed, Declined · Public

Description

From T216664

@Reedy, I've just spotted a conversation on a wiki: that change is breaking some stuff (displaying "Erreur Lua : not enough memory", i.e. "Lua error: not enough memory", at the bottom of some articles), and it is apparently due to templates you've created. I haven't investigated more deeply.

A MediaWiki request can use 660MB of memory; a Scribunto call, only 50MB.

My "fix" to the frwiki templates basically sacrificed memory to get much less CPU time... So the older ParserFunctions version could potentially have had access to much more memory to execute.

Event Timeline

Change 511061 had a related patch set uploaded (by Reedy; owner: Reedy):
[operations/mediawiki-config@master] Increase memory limit for Scribunto to 100MB

https://gerrit.wikimedia.org/r/511061
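For reference, Scribunto's per-call limit lives in the LuaSandbox engine configuration. A sketch of what such a bump might look like (the exact wmf-config file and placement are assumed here, not copied from the patch):

```php
// Sketch only: the real change lives in operations/mediawiki-config, and its
// exact file and placement are assumed for illustration. memoryLimit is in bytes.
$wgScribuntoEngineConf['luasandbox']['memoryLimit'] = 100 * 1024 * 1024; // previously 50MB
```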

Volans added a project: serviceops.

I have no opinion. I don't think the existing 50MB limit was chosen for any specific reason.

ArielGlenn triaged this task as Medium priority.Jun 10 2019, 6:48 AM

Looking at the template in question, the obvious solution is to stop doing that. If that's what it takes to exceed the memory limit, then it's probably set about right. We're talking about a table with 36,000 entries, and there are at least 8 such modules loaded into a typical article. What value is this bringing to the articles?


Did you look at the pre-scribunto conversions?

Basically, it seems to be an answer to "how do we do a data lookup". Not the best, but it worked ("if it ain't broke, don't fix it"), and the original version pre-dates Wikidata. As we all know, Wiki[mp]edians do find some creative solutions for the problems they face.
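The "data lookup" pattern being described can be sketched roughly as follows (plain Lua; module and key names are invented for illustration and are not the actual frwiki modules):

```lua
-- Hypothetical sketch of the pattern described above. A Scribunto "data
-- module" is just a Lua module that returns one big lookup table:
local data = {
    ["01001"] = "L'Abergement-Clémenciat",
    ["01002"] = "L'Abergement-de-Varey",
    -- ... roughly 36,000 entries in the real modules
}

-- A consumer module then does a plain key lookup. In Scribunto, the table
-- would come from mw.loadData('Module:...'), which loads and caches it once
-- per page render; the loaded data still counts against the 50MB limit.
local function lookup(key)
    return data[key]
end

print(lookup("01001"))  -- L'Abergement-Clémenciat
```

With eight such modules loaded into a typical article, it's easy to see how the per-call memory budget gets eaten.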

I guess it was fine when they were originally created (https://fr.wikipedia.org/w/index.php?title=Mod%C3%A8le:Donn%C3%A9es_Nbcom2009&action=history goes back to early 2012, predating Wikidata by only a few months), and a lot of the flexibility Wikidata has these days definitely wasn't there for a few years afterwards.

It would've certainly been more performant to subst: the result, and/or just populate with a bot as necessary, but as the approach worked for the smaller data sets, it just got replicated across to the larger ones, which apparently worked fine on HHVM but broke when switching to PHP 7. Why they didn't choose a bot solution or a subst:-type solution is a question for the original authors. The datasets don't change, so I guess there is little value in them being dynamic and therefore "automatically updated".

It seems the frwiki community has "solved" the problem (for now) by switching the strings to numbers: https://fr.wikipedia.org/w/index.php?title=Module%3ADonn%C3%A9es_Rang2009&type=revision&diff=159380536&oldid=159162400. But I know that at Wikimedia-Hackathon-2019, @Trizek-WMF did speak to some frwiki Wikidata people who were going to be moving the data to Wikidata, because that's where data should really be living, right? Doing a lookup on a known Q number for a specific P (whichever of the datasets is wanted) probably makes more sense longer term, and would make the dataset more widely reusable.
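The strings-to-numbers workaround helps because of how Lua stores values: numbers sit inline in the table, while each distinct string is a separately allocated, interned object with its own header. A rough illustration in plain Lua (outside MediaWiki; exact sizes vary by Lua version and platform):

```lua
-- Measure Lua heap usage before and after building two 36,000-entry tables,
-- one holding formatted strings and one holding plain numbers.
local function kb_used()
    collectgarbage('collect')          -- full GC cycle, then
    return collectgarbage('count')     -- report KB currently in use by Lua
end

local base = kb_used()

local as_strings = {}
for i = 1, 36000 do
    as_strings[i] = string.format('%05d', i)  -- e.g. "00042"
end
local with_strings = kb_used() - base

local as_numbers = {}
for i = 1, 36000 do
    as_numbers[i] = i
end
local with_numbers = kb_used() - base - with_strings

-- The string-valued table costs noticeably more than the number-valued one.
print(with_strings, with_numbers)
```

The same data as numbers is cheaper by a constant factor, which is presumably why the frwiki edit pushed the modules back under the 50MB limit without changing what they do.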

Given that, and the fact that they've (seemingly) worked around it for now, we can close this as declined.

Change 511061 abandoned by Reedy:
Increase memory limit for Scribunto to 100MB

https://gerrit.wikimedia.org/r/511061