Pages display Lua error in mw.wikibase.entity.lua
Open, Unbreak Now!, Public

Description

An intermittent problem occurs when a Lua module executes entity = mw.wikibase.getEntity() or its equivalent entity = mw.wikibase.getEntityObject().

The symptom is that a rendered page shows a big red error message saying:

Lua error in mw.wikibase.entity.lua at line 34: The entity data must be a table obtained via mw.wikibase.getEntityObject

and the page is added to Category:Pages_with_script_errors (or local equivalent).

The problem goes away when the page is purged, so a link to a demonstration only works for a limited period. Currently, these pages show the error:

Using Special:Search for mw.wikibase.getEntityObject shows lots of cached examples:

Google search also shows examples. For example, search for one of the following then view Google's cache:

"error in mw.wikibase.entity.lua" site:commons.wikimedia.org
"error in mw.wikibase.entity.lua" site:en.wikipedia.org
"error in mw.wikibase.entity.lua" site:zh.wikipedia.org

The problem occurs when a Lua module uses Wikidata: the module calls mw.wikibase.getEntity, which in turn calls php.getEntityId. My guess is that this call sometimes times out when it tries to establish a network connection to the Wikidata database.
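
For orientation, here is a minimal sketch (an assumed example, not taken from any particular affected module) of the pattern such modules use. When this bug strikes, the error is raised inside mw.wikibase.entity.lua during the getEntityObject() call, before the module code below even runs:

-- Minimal sketch of a typical infobox-style module call (assumed example).
local p = {}

function p.example( frame )
	-- getEntityObject() is an alias of getEntity(); both go through the PHP
	-- binding (php.getEntityId and friends) to fetch the item linked to the page.
	local entity = mw.wikibase.getEntityObject()
	if not entity then
		-- No item is connected to the current page.
		return ''
	end
	return entity:getLabel() or ''
end

return p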

There was a discussion at enwiki.


Interestingly, one of today's examples seems to coincide with an edit, a bot edit. The article is Bergamo, and no doubt it will be promptly purged, but the date and time in the category's by-date page list exactly matches the last edit by InternetArchiveBot. I think this is just coincidence, though; for all other pages I have checked, there was no edit to the page, or to e.g. a template, that could have caused it.

Vachovec1 added a comment. Edited Jul 18 2017, 2:57 PM

An observation: today I went manually through the Category:Pages with script errors equivalent at cs-wikipedia (https://cs.wikipedia.org/wiki/Kategorie:%C3%9Adr%C5%BEba:Str%C3%A1nky_s_chybami_skript%C5%AF) – about 60 pages. A lot of false positives (all purged), and three real cases of this error (one purged /important page/, two remaining: https://cs.wikipedia.org/wiki/Litold_Znojemsk%C3%BD and https://cs.wikipedia.org/wiki/PlayStation_3, if someone is able and willing to catch and post the cached version info here). But I found that there are pages with this error which are NOT listed in the category. Examples: https://cs.wikipedia.org/wiki/Svat%C3%A1_Lucie_(osoba), https://cs.wikipedia.org/wiki/B%C3%ADlsko_u_Ho%C5%99ic. Some of these can be found via search (https://cs.wikipedia.org/w/index.php?search=mw.wikibase.entity.lua&title=Speci%C3%A1ln%C3%AD:Hled%C3%A1n%C3%AD&go=J%C3%ADt+na&searchToken=4gqaprg0qyc783z3awbaaa7yn), but there are false positives there too.

Change 366221 had a related patch set uploaded (by Thiemo Mättig (WMDE); owner: Thiemo Mättig (WMDE)):
[mediawiki/extensions/Wikibase@master] More expressive mw.wikibase.entity.create() error messages

https://gerrit.wikimedia.org/r/366221

thiemowmde moved this task from Proposed to Review on the Wikidata-Sprint board. Jul 19 2017, 9:54 AM
Larske added a comment. Edited Jul 19 2017, 10:17 AM

Here are my 2 cents on this:
This morning I noticed that there were between 8,400 and 8,500 pages in the Pages with script errors category on svwp. I made a "null edit" to all of them, bringing the category size down to below 20 pages. But the category keeps filling up again at a slow but steady pace; right now there are more than 130 pages.
Most of the pages neither show the category in the footer nor display the fat red Lua error message in the text, but they are still put in the category.

Here are two examples of very similar articles, of almost the same size and using the same template. Neither article has been edited since March 16, 2016, yet the error message is shown in one of them but not in the other. Both articles are, however, present in the category now, until someone null-edits them.

  1. [[:sv:Toolgana Rockhole]] (does not show the error message or the category in the footer) link: https://sv.wikipedia.org/wiki/Toolgana_Rockhole
  2. [[:sv:Toothagoona Rockhole]] (does show the error message and the category in the footer) link: https://sv.wikipedia.org/wiki/Toothagoona_Rockhole

Category: [[:sv:Kategori:Sidor med skriptfel]] link: https://sv.wikipedia.org/wiki/Kategori:Sidor_med_skriptfel

This leads me to suspect some "timing error" that shows up in a nondeterministic way.

Hope this can be of some use in the troubleshooting.

What you describe is, in my experience, how the category works in general. Before this issue the category (on en.wp) had far fewer articles, and I would from time to time go through them and check them. Some I could fix and they would immediately be removed from the category. But others, even if fixed (whether by me or already by someone else), were not removed from the category for hours, sometimes even a day.

One example is a temporarily broken template: someone makes an edit to a template which breaks it, then quickly reverts or fixes the problem as they notice it. Sometimes this would result in pages being put in the category and still being there several hours later.

So I would not read anything into pages not being removed from the category even after being 'fixed'; it is just how that category works.

Change 366221 merged by jenkins-bot:
[mediawiki/extensions/Wikibase@master] More expressive mw.wikibase.entity.create() error messages

https://gerrit.wikimedia.org/r/366221

hoo added subscribers: Anomie, hoo. Jul 20 2017, 9:05 AM

I looked into this for a bit and can't see any apparent root cause.

Maybe related: T166348 (but I don't understand it enough to judge this, right now).

@hoo so you think it's a Lua bug?

I hope I8e78a97a0a04 will give us a bit more information.

hoo added a comment. Jul 20 2017, 12:55 PM

@hoo so you think it's a Lua bug?

Well, it could be a bug in Wikibase's PHP code which is surfacing in the code that binds to the Lua runtime within Scribunto. But I don't really have an idea.

The errors mentioned in the ticket also seem to correlate to this problem (based on the parser cache timestamps), but they are so frequent that this might not mean anything.

@hoo so you think it's a Lua bug?

The errors mentioned in the ticket also seem to correlate to this problem (based on the parser cache timestamps), but they are so frequent that this might not mean anything.

It might indeed be an effect of T166348; I suspected that, but haven't had time to try to establish a correlation between log messages and instances of this problem showing up.

If that is the case, then once T166348 is resolved, new instances of this problem should stop appearing. I've filed T171166: Build and push a new hhvm-luasandbox package to specifically track deploying that fix.

Change 366810 had a related patch set uploaded (by Thiemo Mättig (WMDE); owner: Thiemo Mättig (WMDE)):
[mediawiki/extensions/Wikibase@wmf/1.30.0-wmf.10] More expressive mw.wikibase.entity.create() error messages

https://gerrit.wikimedia.org/r/366810

Screen shot of today's error with time stamp

Vachovec1 added a comment. Edited Sun, Jul 23, 9:28 AM

I don't think this is primarily a Lua bug. I think Larske, in T170039#3451753, is onto something with his "timing error" hypothesis. It really looks like this:

  1. The page is loading, but "something" times out (or "some" process/job fails).
  2. Lua errors are generated as a result of 1), due to bad communication between the loaded page and Wikidata (values are not passed, or they are passed unprocessed, which then causes the errors).
  3. A "bad" version of the page is cached.

That of course opens some questions:

What is the trigger for caching? Just a page view? Is that possible? It can't happen on every page view, though, because the cached version can persist for days through repeated page views.
Why exactly the above-mentioned error message? Is it some "default" response when malformed data is passed? In that case it could be just a red herring.

See T171392#3463575 for another interesting occurrence (while debugging something else).

IKhitron added a comment. Edited Sun, Jul 23, 8:41 PM

Hi. Many people have spoken about the script errors category. I check this category on hewiki at least once every day. In recent weeks we have had 50-300 false positives every day. I run a JS null-edit script, and even then a couple of new pages appear, so I need to run it 2-3 times to get an empty category, and so on day after day. But the important part: the daily number of false positives keeps growing!

hoo added a comment. Sun, Jul 23, 10:08 PM

Due to the weekend, work on this is currently stalled, but we will continue working on it early next week; please bear with us.

We have a patch ready to deploy which will make it more obvious what exactly the problem is here. There is also a good chance of T171166: Build and push a new hhvm-luasandbox package fixing this issue (that is also going to be deployed early next week).

Change 366810 merged by jenkins-bot:
[mediawiki/extensions/Wikibase@wmf/1.30.0-wmf.10] More expressive mw.wikibase.entity.create() error messages

https://gerrit.wikimedia.org/r/366810

@Vachovec1: Desynchronization between the displayed version and the category membership is easily explained: not every parse of the page updates the links tables. If you have certain user preferences set, you see the page parsed according to your preferences but that result does not necessarily correspond to the state of the links tables. Further, the web UI action=purge doesn't update the links tables either (which is why null editing is a "stronger" version of purging).


Just to add the API point of view: when using the API for purging, it is important to pass forcelinkupdate to get the "strong" purge. In our case, one can purge 50 pages in the category with a single API call:

new mw.Api().post( {
	action: 'purge',
	generator: 'categorymembers',
	// gcmtitle must include the namespace prefix of the category
	gcmtitle: 'Category:Pages_with_script_errors',
	gcmlimit: 50,
	forcelinkupdate: 1
} );

Better 30; you don't know who will run it.

@Vachovec1: Desynchronization between the displayed version and the category membership is easily explained: not every parse of the page updates the links tables. If you have certain user preferences set, you see the page parsed according to your preferences but that result does not necessarily correspond to the state of the links tables. Further, the web UI action=purge doesn't update the links tables either (which is why null editing is a "stronger" version of purging).

I thought something like that. So that means the "trigger" could be a casual page view, during which the links tables are updated the wrong way?

I would point to one thing which is particularly interesting: if I remember correctly, when this error occurs, every Wikidata entry on the page is affected, and every time the same Lua error message is displayed. That's why I think that the actual bug may not be hidden in the Lua scripts, but rather somewhere in the parsing/caching process.

JohnBlackburne added a comment. Edited Tue, Jul 25, 9:21 AM

One of the new error messages at the bottom of en:Higgs Boson as I type:

Lua error in mw.wikibase.entity.lua at line 37: data.schemaVersion must be a number, got nil instead.

For the second time the en.wp category has shrunk dramatically: yesterday there were hundreds of articles in it, now there are only 16.

Also en:Uranium and en:Jason Lezak have/had the same error. The latter has another related error in place of the Official website template, despite the data being on Wikidata:

No URL found. Please specify a URL here or add one to Wikidata.

One of the new error messages at the bottom of en:Higgs Boson as I type:

Lua error in mw.wikibase.entity.lua at line 37: data.schemaVersion must be a number, got nil instead.

That could be the result of another bug. At cs-wiki, we had a page which showed a Lua: not enough memory error. When purged, the error temporarily changed to Lua error in mw.wikibase.entity.lua at line 34, but then we were back to the old error. We found that the cause had nothing to do with this bug; there was a real "Lua memory overload" due to a badly formatted Wikidata call.

For the second time the category has shrunk dramatically: yesterday there were hundreds of articles in it, now there are only 16.

Our category: https://cs.wikipedia.org/wiki/Kategorie:%C3%9Adr%C5%BEba:Str%C3%A1nky_s_chybami_skript%C5%AF has 129 entries now, mostly false positives, I presume.

But I am able to find some new occurrences of the error via a direct search for the error message: https://cs.wikipedia.org/w/index.php?search=The+entity+data+must+be+a+table+obtained+via+mw.wikibase.getEntityObject&searchToken=97spieqenqo68wr3ypfrpq5zy. The false positives are much rarer there.

I see that from about Jun 13 there are ongoing problems with the WMF ParserCache - T167784, probably as a consequence of https://gerrit.wikimedia.org/r/#/c/354504/. Could it be related? A real root cause for bugs like this and T168040? Both bugs seem to be a consequence of some parser cache failure. And the dates line up (first reports for T168040 are before Jun 20, first reports here are from Jun 23 - at en-wiki).

The new error message was mentioned at en:WP:VPT and I replied that it is part of the "what exactly the problem is" patch mentioned by hoo above. I also noted that I recently purged articles with script errors, and the category is currently gaining about four articles per hour.

I thought something like that. So that means the "trigger" could be a casual page view, during which the links tables are updated the wrong way?

Not "wrong".

I would point to one thing which is particularly interesting: if I remember correctly, when this error occurs, every Wikidata entry on the page is affected, and every time the same Lua error message is displayed. That's why I think that the actual bug may not be hidden in the Lua scripts, but rather somewhere in the parsing/caching process.

I doubt it's parsing/caching; it could just as easily be explained by initialization data not being populated correctly due to T166348, so that every call then hits the bad initialization data. I'm still hoping that this goes away once T171166 is done.

I see that from about Jun 13 there are ongoing problems with the WMF ParserCache - T167784, probably as a consequence of https://gerrit.wikimedia.org/r/#/c/354504/. Could it be related? A real root cause for bugs like this and T168040? Both bugs seem to be a consequence of some parser cache failure. And the dates line up (first reports for T168040 are before Jun 20, first reports here are from Jun 23 - at en-wiki).

Highly unlikely. As I said above, this doesn't seem to be any sort of parser cache failure.

It seems that there is no fast solution to this problem. Within the last month I have made more than one thousand null edits manually, without any support from the community. We need help clearing Category:Pages_with_script_errors because we need this category for maintenance work; right now it is useless. Now I am also checking the Linter errors, and again there is no support from the community.

Unfortunately, I have no experience with Python and Pywikibot.

Can anybody help me/us?

Of course. We have a button in the "more" menu on category pages, "refresh". Clicking it performs a forced-link-update null edit on all pages in the category. Interested?

Yes, I am really interested.

mw.loader.using( [ 'mediawiki.util', 'mediawiki.api' ] ).then( function () {
	// Number of category members to purge per API request.
	var step = 1;
	var count;
	var wait;

	// Purge category members with forcelinkupdate (the "strong" purge that also
	// updates the links tables), following the API continuation until done.
	function postPurge( cat, addParams ) {
		var apiParams = $.extend( {
			action: 'purge',
			generator: 'categorymembers',
			gcmtitle: cat,
			gcmlimit: step,
			forcelinkupdate: 1
		}, addParams );
		new mw.Api().post( apiParams )
			.fail( function () {
				alert( 'Fail' );
			} )
			.done( function ( d ) {
				console.log( d );
				count += step;
				if ( d.warnings === undefined && d[ 'continue' ] !== undefined &&
					d[ 'continue' ].gcmcontinue
				) {
					mw.notify( count + ' pages were updated' );
					setTimeout( function () {
						postPurge( cat, d[ 'continue' ] );
					}, wait );
				} else {
					alert( 'Done!' );
					document.location.reload();
				}
			} );
	}

	// Only offer the "refresh" link on category pages (namespace 14).
	if ( mw.config.get( 'wgNamespaceNumber' ) === 14 ) {
		wait = 1000;
		new mw.Api().get( {
			meta: 'userinfo',
			uiprop: 'ratelimits'
		} ).done( function ( d ) {
			// Wait longer between requests if the user has a purge rate limit.
			if ( d && d.query && d.query.userinfo && d.query.userinfo.ratelimits &&
				d.query.userinfo.ratelimits.purge
			) {
				wait = 2000;
			}
			$( mw.util.addPortletLink( 'p-cactions', '#', 'refresh', 'pt-refresh' ) )
				.click( function () {
					count = 0;
					postPurge( mw.config.get( 'wgPageName' ).replace( /_/g, ' ' ) );
				} );
		} );
	}
} );

Thanks. This script is helpful.

Now (or at least only now I can see it) we have 2 types of error:
Errore Lua in mw.wikibase.entity.lua alla linea 37: data.schemaVersion must be a number, got nil instead.
and
Errore Lua in mw.wikibase.entity.lua alla linea 34: The entity data must be a table obtained via mw.wikibase.getEntityObject.

The errors are in the same field of the same template, which uses the property P473, on 2 different pages.

The first error is connected to an item without the property P473, but we have a local property so the template handles this. With a purge it's OK: https://it.wikipedia.org/wiki/Hinton_(Oklahoma)

The second error is connected to an item with the property. With a purge it's OK: https://it.wikipedia.org/wiki/Kreischa

For the statistics: on the Italian wiki we reached 2000 pages in a couple of days; it is less now because I started to purge them with the script posted by @IKhitron (step=50).

@ValterVB, it's not mine, I just brought it here.
And I do not recommend using step=50, because the script will fail if you are not a sysop or bot, and can fail even if you are.

@ValterVB are you sure that the "The entity data must be a table obtained via mw.wikibase.getEntityObject" errors are not old ones? The message "data.schemaVersion must be a number, got nil instead" is the result of the extra error checks we added to investigate this issue. I assumed it would now catch the problem a bit earlier, preventing the old "entity data must be a table" errors from showing up. But if we are still getting both errors, this is even more broken than I thought, and I have even less of a clue why.
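
For orientation, here is a rough sketch (an assumption, not the actual Wikibase source) of the kind of checks mw.wikibase.entity.create() performs on the data handed to it, and where the two messages come from:

-- Assumed sketch of the validation in mw.wikibase.entity.create();
-- the real code lives in mw.wikibase.entity.lua in the Wikibase extension.
local function create( data )
	if type( data ) ~= 'table' then
		-- the old "line 34" message
		error( 'The entity data must be a table obtained via mw.wikibase.getEntityObject', 3 )
	end
	if type( data.schemaVersion ) ~= 'number' then
		-- the newer, more specific "line 37" message added for this investigation
		error( 'data.schemaVersion must be a number, got ' ..
			type( data.schemaVersion ) .. ' instead', 3 )
	end
	-- ... construct and return the entity object from data ...
end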

Well, @ValterVB, I believe your script stops all the time with the message "Done", despite not being done?

@daniel, probably the error is a few days old, because the last complete purge was Saturday, so the page was added to the category after Saturday evening.
@IKhitron The message "Done" appears only at the end, but sometimes I get the message "Fail"; I restart the script and it's all OK. And on it.wiki I'm an admin.

Next time be patient. If you used 1, it would run once until the end.

Vachovec1 added a comment. Edited Wed, Jul 26, 6:19 PM

@daniel: I am not sure, but I am inclined to the view that both errors occur independently. The category is completely unreliable (or the pages are reported with a long delay?). On cs-wiki someone probably just made a purge run through it (no affected pages are shown, not to mention false positives), but when searching directly for the error messages, I can easily find about 100 pages with the "line 34" error message (almost no false positives) - that is much more than two days ago. There are about 20 pages with the "line 37" error message (again, almost no false positives). Independent of the type of error message, the effect on the affected page is the same.

You can try to purge all the pages with the old message, and check tomorrow if there is a new one, or more, @Vachovec1.

I purged :fr:Catégorie:Page avec des erreurs de script around midday UTC. Around 7 hours later there were 364 pages in that category. I opened the first 200 of them and 20 still displayed the message: "Erreur Lua dans mw.wikibase.entity.lua à la ligne 37 : data.schemaVersion must be a number, got nil instead." It's always the same data.schemaVersion error; I have never seen the one about the data itself.

List of pages that displayed the message a few minutes ago, if you want to try to see them:

Vachovec1 added a comment. Edited Thu, Jul 27, 12:29 AM

You can try to purge all the pages with the old message, and check tomorrow if there is a new one, or more, @Vachovec1.

OK, I purged/null edited everything. No positive search results now. We will see in 24 hours.

You forgot to add Lua!

MediaWiki-extensions-Lua has nothing to do with this task. That extension is a different approach that doesn't offer the rigorous sandboxing that Scribunto provides, and is not installed on Wikimedia wikis.

Vachovec1 added a comment. Edited Thu, Jul 27, 11:10 PM

You can try to purge all the pages with the old message, and check tomorrow if there is a new one, or more, @Vachovec1.

OK, I purged/null edited everything. No positive search results now. We will see in 24 hours.

Interesting. So yesterday I purged/null-edited about 180 pages at cs-wiki found through a direct search for the specific error messages (about 140 pages with the "line 34" error message, 40 with the "line 37" error message). Mostly "true" errors, only about 20 percent false positives. 24 hours later, no "line 34" error messages are shown, but 6 new "line 37" error messages were found (4 "real" errors, 2 false positives). None of that is shown in the related category (there are only 4 new false positives).

Summarizing:

  1. new cases of this bug are still appearing
  2. the "line 34" message is probably obsolete, it was replaced with new "line 37" error message (this is apparently an expected outcome of the patch above)
  3. the category supposed to catch the script errors is completely unreliable in this matter, it's much better to get a list of affected pages through a direct search for error message(s)

This is what I've got from digging through Logstash. Three types of error happen:

  1. Saving a page fails because the Lua engine returns false (one example; you can find tons of them)
  2. It gets propagated to ChangeProp (an example)
  3. And to the refreshLinks job (an example)

I think the first type of error is the source of the other two.

I put the whole backtrace of the first type for debugging:

t exception.file	/srv/mediawiki/php-1.30.0-wmf.11/extensions/Scribunto/engines/LuaSandbox/Engine.php:318
t exception.message	Scribunto_LuaSandboxInterpreter::callFunction: LuaSandboxFunction::call returned false
t exception.trace	#0 /srv/mediawiki/php-1.30.0-wmf.11/extensions/Scribunto/engines/LuaCommon/LuaCommon.php(178): Scribunto_LuaSandboxInterpreter->callFunction(LuaSandboxFunction, array)
#1 /srv/mediawiki/php-1.30.0-wmf.11/extensions/Scribunto/engines/LuaCommon/SiteLibrary.php(91): Scribunto_LuaEngine->registerInterface(string, array, array)
#2 /srv/mediawiki/php-1.30.0-wmf.11/extensions/Scribunto/engines/LuaCommon/LuaCommon.php(512): Scribunto_LuaSiteLibrary->register()
#3 /srv/mediawiki/php-1.30.0-wmf.11/extensions/Scribunto/engines/LuaCommon/LuaCommon.php(149): Scribunto_LuaEngine->instantiatePHPLibrary(string, string, boolean)
#4 /srv/mediawiki/php-1.30.0-wmf.11/extensions/Scribunto/engines/LuaSandbox/Engine.php(37): Scribunto_LuaEngine->load()
#5 /srv/mediawiki/php-1.30.0-wmf.11/extensions/Scribunto/common/Hooks.php(125): Scribunto_LuaSandboxEngine->getResourceUsage(integer)
#6 /srv/mediawiki/php-1.30.0-wmf.11/includes/parser/Parser.php(3408): ScribuntoHooks::invokeHook(Parser, PPTemplateFrame_Hash, array)
#7 /srv/mediawiki/php-1.30.0-wmf.11/includes/parser/Parser.php(3133): Parser->callParserFunction(PPTemplateFrame_Hash, string, array)
#8 /srv/mediawiki/php-1.30.0-wmf.11/includes/parser/Preprocessor_Hash.php(1071): Parser->braceSubstitution(array, PPTemplateFrame_Hash)
#9 /srv/mediawiki/php-1.30.0-wmf.11/includes/parser/Preprocessor_Hash.php(1504): PPFrame_Hash->expand(PPNode_Hash_Tree, integer)
#10 /srv/mediawiki/php-1.30.0-wmf.11/includes/parser/Parser.php(3284): PPTemplateFrame_Hash->cachedExpand(string, PPNode_Hash_Tree)
#11 /srv/mediawiki/php-1.30.0-wmf.11/includes/parser/Preprocessor_Hash.php(1071): Parser->braceSubstitution(array, PPFrame_Hash)
#12 /srv/mediawiki/php-1.30.0-wmf.11/includes/parser/Parser.php(2948): PPFrame_Hash->expand(PPNode_Hash_Tree, integer)
#13 /srv/mediawiki/php-1.30.0-wmf.11/includes/parser/Parser.php(1304): Parser->replaceVariables(string)
#14 /srv/mediawiki/php-1.30.0-wmf.11/includes/parser/Parser.php(451): Parser->internalParse(string)
#15 /srv/mediawiki/php-1.30.0-wmf.11/includes/content/WikitextContent.php(329): Parser->parse(string, Title, ParserOptions, boolean, boolean, NULL)
#16 /srv/mediawiki/php-1.30.0-wmf.11/includes/content/AbstractContent.php(497): WikitextContent->fillParserOutput(Title, NULL, ParserOptions, boolean, ParserOutput)
#17 /srv/mediawiki/php-1.30.0-wmf.11/includes/page/WikiPage.php(2078): AbstractContent->getParserOutput(Title, NULL, ParserOptions)
#18 /srv/mediawiki/php-1.30.0-wmf.11/includes/api/ApiStashEdit.php(200): WikiPage->prepareContentForEdit(WikitextContent, NULL, User, string, boolean)
#19 /srv/mediawiki/php-1.30.0-wmf.11/includes/api/ApiStashEdit.php(148): ApiStashEdit::parseAndStash(WikiPage, WikitextContent, User, string)
#20 /srv/mediawiki/php-1.30.0-wmf.11/includes/api/ApiMain.php(1583): ApiStashEdit->execute()
#21 /srv/mediawiki/php-1.30.0-wmf.11/includes/api/ApiMain.php(546): ApiMain->executeAction()
#22 /srv/mediawiki/php-1.30.0-wmf.11/includes/api/ApiMain.php(517): ApiMain->executeActionWithErrorHandling()
#23 /srv/mediawiki/php-1.30.0-wmf.11/api.php(94): ApiMain->execute()
#24 /srv/mediawiki/w/api.php(3): include(string)
#25 {main}

HTH

Using Special:Search for mw.wikibase.getEntityObject shows 26,913 pages in arwiki

I think the first type of error is the source of the other two.

I doubt errors in any of these places causes the errors in the other places. But the underlying bug in all three of those is T166348.

I doubt errors in any of these places causes the errors in the other places. But the underlying bug in all three of those is T166348.

Thanks for pointing that out. In that case either I couldn't find these errors in Logstash, or these two bugs are somehow related (just a wild guess, I have no idea how Lua works).

Dvorapa added a subscriber: Dvorapa. Edited Mon, Jul 31, 10:30 PM

On Czech Wikipedia (cswiki) there is no Lua error in mw.wikibase.entity.lua at line 34: The entity data must be a table obtained via mw.wikibase.getEntityObject, but instead there is the error Lua error in mw.wikibase.entity.lua at line 37: data.schemaVersion must be a number, got nil instead. The bug is on circa 200 pages; you can see their list here (already purged). A blank edit works.

Change 369323 had a related patch set uploaded (by Thiemo Mättig (WMDE); owner: Thiemo Mättig (WMDE)):
[mediawiki/extensions/Wikibase@master] Add mw.wikibase.entity.create() error message for empty tables

https://gerrit.wikimedia.org/r/369323

Change 369324 had a related patch set uploaded (by Thiemo Mättig (WMDE); owner: Thiemo Mättig (WMDE)):
[mediawiki/extensions/Wikibase@master] Skip cloning in mw.wikibase.entity.create() on first call

https://gerrit.wikimedia.org/r/369324

Change 369324 abandoned by Thiemo Mättig (WMDE):
Skip cloning in mw.wikibase.entity.create() on first call

Reason:
Whoops.

Being able to predict the future would be so cool. The code could then know if it is going to be called a second time.

https://gerrit.wikimedia.org/r/369324

Change 369612 had a related patch set uploaded (by Thiemo Mättig (WMDE); owner: Thiemo Mättig (WMDE)):
[mediawiki/extensions/Wikibase@master] Add more error messages when Lua code produces empty tables

https://gerrit.wikimedia.org/r/369612

Change 369323 merged by jenkins-bot:
[mediawiki/extensions/Wikibase@master] Add mw.wikibase.entity.create() error message for empty tables

https://gerrit.wikimedia.org/r/369323

Mr.Ibrahem added a comment. Edited Tue, Aug 8, 1:05 AM

Today I see this at arwiki.
mw.wikibase.entity.lua on line 37 data.schemaVersion must be a number, got nil instead.

Using Special:Search for "data.schemaVersion must be a number" (with quotes) shows many of the mw.wikibase.entity.lua at line 37: data.schemaversion must be a number, got nil instead problems. That applies at enwiki (330 pages listed), but itwiki wins with over 5700 pages. Possibly the problem pages are noticed and purged more quickly at enwiki.

Today we detected inexplicable Lua/Wikidata errors. Maybe they have the same cause as the one described in this task.

  • In Rocky Mountain National Park we got the message that the object IDs "Q4735531, Q4878135" are not known. But these IDs are not used in the article, and they do exist at Wikidata.
  • In other cases like Chennai we got nil errors for the value of the property P856 (official website) although the value is set.

They are used indeed. See Q777183, P2872.

hoo added a comment. Wed, Aug 9, 4:30 PM

Today we detected inexplicable Lua/Wikidata errors. Maybe they have the same cause as the one described in this task.

  • In Rocky Mountain National Park we got the message that the object IDs "Q4735531, Q4878135" are not known. But these IDs are not used in the article, and they do exist at Wikidata.

This is a bug in some local Lua module: apparently something tries to load the entity "Q4735531, Q4878135" (these are not two errors, but a single error for the comma-separated ID "Q4735531, Q4878135").

  • In other cases like Chennai we got nil errors for the value of the property P856 (official website) although the value is set.

This is a bug in your Modul:Wikidata2's getProperty function: it assumes wd.getLabel().label returns non-nil, but it doesn't always (as the label is sometimes taken from mw.wikibase.label, which can be nil).
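
A minimal defensive sketch of the kind of nil check that avoids this; safeLabel and the wd argument are hypothetical stand-ins, not Modul:Wikidata2's actual code:

-- Hypothetical helper; "wd" stands in for the local Wikidata2-style module.
local function safeLabel( wd, id )
	local result = wd.getLabel( id )
	-- result and result.label can be nil (mw.wikibase.label may return nil),
	-- so fall back to the bare id instead of indexing or returning nil.
	if result and result.label then
		return result.label
	end
	return id or ''
end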

This is unrelated to the issues mentioned here.

Thanks for your answer. We will check for the cause in the scripts and why these errors occurred only today.

On it.wikipedia: 1900 pages in 24 hours in the category. Is the problem increasing? Users can now often see the red message on pages. I think this is starting to be a problem for readers as well.

I also believe it has become worse.

Perhaps you could use a bot to purge the articles in it.wikipedia if it's becoming a problem?

Is this the solution? Purging the pages? I already do it, but now we have an average of 80-90 pages per hour. Yesterday I emptied the category at 9:42, and now, on 10 Aug at 19:45, the category has 3019 pages.

Of course it is not a final solution, but a temporary one. Finnish Wikipedia has the same problem, and there is a bot that purges pages containing "data.schemaVersion must be a number" once per hour to keep the number of these red error texts as low as possible.

ksmith added a subscriber: ksmith. Fri, Aug 18, 5:17 PM

I saw this mentioned on wikitech-l. To summarize my understanding of the current state of this task: The hope is that fixing T171166: Build and push a new hhvm-luasandbox package will fix this, so effectively this task is blocked on that one. If that's not correct, please say so. If it is, should we set up a phab subtask dependency to reflect that blocking relationship?