Error for some Lua modules "The time allocated for running scripts has expired"
Closed, ResolvedPublic


For several days, some Lua modules have been failing with the error "The time allocated for running scripts has expired".

Examples: here, here, here

Event Timeline

thiemowmde triaged this task as Lowest priority.Mar 1 2018, 12:33 PM

Pushing against the limits is basically the point of these maintenance pages. Whenever something like this is done, it is pretty much guaranteed there will be a few extreme outliers. The first example provided loads 300+ Wikidata items, and for each one:

  • Loads all labels in all languages
  • Checks whether 13 specific properties are present and filled in properly
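The pattern described above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual module code; the property list and function names are made up for the example:

```lua
local p = {}

-- In the real module this is a list of 13 properties; these IDs are examples.
local PROPERTIES = { 'P21', 'P27', 'P106' }

function p.checkItems( itemIds )
	local rows = {}
	for _, id in ipairs( itemIds ) do
		-- Expensive: fetches the complete entity,
		-- including every label in every language.
		local entity = mw.wikibase.getEntity( id )
		if entity then
			local missing = {}
			for _, prop in ipairs( PROPERTIES ) do
				if not ( entity.claims and entity.claims[ prop ] ) then
					table.insert( missing, prop )
				end
			end
			rows[ #rows + 1 ] = { id = id, missing = missing }
		end
	end
	return rows
end

return p
```

With 300+ items, the per-item cost of loading the full entity dominates, which is why the page runs into the script time limit.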

The configuration for this can be seen in

The main code is in
It contains a problematic mw.wikibase.getEntityObject call that might be worth replacing with more specific methods like getBestStatements or the new getAllStatements (T166056).
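For illustration, the difference between the two approaches looks like this (a minimal sketch; 'P31' is just an example property):

```lua
-- Heavy: loads the whole entity, with all labels and all statements.
local entity = mw.wikibase.getEntityObject( id )
local statements = entity and entity.claims and entity.claims[ 'P31' ]

-- Lighter: fetch only the statements that are actually needed.
local best = mw.wikibase.getBestStatements( id, 'P31' ) -- best-rank statements only
local all = mw.wikibase.getAllStatements( id, 'P31' )   -- every statement (T166056)
```

The targeted accessors avoid deserializing the entire entity, which is the expensive part when hundreds of items are involved.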

As the author of the module in question, I should add that there were no such time issues when the module was written: the expensive function count limit was reached well before the time limit was.

The third page linked in the report uses a test module specifically written to test these limits. That module only loads the entity object and retrieves its properties and English label. A sandbox version replaces the getEntityObject call with a call to getLabelByLang plus one getAllStatements call for each property in the "full" model. As you can see, it performs significantly worse than the getEntityObject version linked in my last comment.
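The sandbox variant described above would look roughly like this (an illustrative sketch, not the actual sandbox code; fullModelProperties is a hypothetical list of the properties in the "full" model):

```lua
-- One label lookup instead of loading the full entity...
local label = mw.wikibase.getLabelByLang( id, 'en' )

-- ...but one statement fetch per property, each a separate access.
local claims = {}
for _, prop in ipairs( fullModelProperties ) do
	claims[ prop ] = mw.wikibase.getAllStatements( id, prop )
end
```

With many properties per item, the repeated per-property fetches can end up costing more than a single getEntityObject call, which matches the observation above.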

In fact, it was working as of November 2015. Something changed in Wikibase between then and now, most likely closer to now (I only noticed this in the last few weeks).

If Phase 3 (auto-generated lists from Wikidata) is ever to become a reality, this is not promising.

@Ladsgroup can you think of anything that changed that would cause this?

thiemowmde raised the priority of this task from Lowest to Medium.Mar 5 2018, 9:04 AM
thiemowmde moved this task from incoming to needs discussion or investigation on the Wikidata board.
thiemowmde added a project: Performance Issue.

What generally changed since November 2015 is:

The next step in this task should be to recreate an example use case on a local system and profile the code involved to find the bottlenecks. Maybe it's wb_terms again? Quite possible.

I took a look at this, and it seems this task is a symptom rather than a problem on its own. If performance is degrading gradually, it means we are hitting scalability issues, and there are several ways to handle that; one of them is optimizing the wb_terms table, which we are working on, but that will take months to become fully useful. If, on the other hand, we are suddenly returning many errors like this, it is a regression bug.

Looking at the slow-parse graph in Logstash, we had an increase around the 15th of February that started to decrease around the 1st of March, and last week was the lowest in the past 30 days. The drop started exactly when I enabled reading from term_full_entity_id instead of term_entity_id. That helps because it removes an additional WHERE clause from the query (term_entity_type = 'item', etc.).
For the record:

image.png (195×1 px, 27 KB)

Lydia_Pintscher claimed this task.

Working again now. I assume this was connected to the performance hit we encountered and fixed yesterday.

So far so good: some of the pages are working again, and others work after a purge. I'm still monitoring the script errors category.

Thanks for fixing this!