
Cannot delete two pages with large histories even having the appropriate permissions to do so
Closed, Resolved · Public

Description

Hello. I'm trying to handle this request to have some pages with large histories deleted. As a steward, I have the bigdelete permission in my global group. However, I cannot delete them the normal way (clicking on the delete tab), because the operation is aborted when it takes too long (obviously, a page with ~50k revisions takes time to delete). I can't delete them via the API either, because it always gives me an obscure "HTTP error".

The API parameters I tried, which did work for two of the pages, were:

/w/api.php?action=delete&format=json&maxlag=5&servedby=1&curtimestamp=1&title=<pagename here>&reason=<reason here>&token=<token here>&utf8=1
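
For reference, a rough sketch of the same deletion as an authenticated request, since action=delete must be sent as a POST with a CSRF token. The cookie-jar file and placeholders below are illustrative, not from the original report:

    # Fetch a CSRF token first (requires a logged-in session stored in cookies.txt):
    curl -s -b cookies.txt 'https://<wiki>/w/api.php?action=query&meta=tokens&type=csrf&format=json'

    # Then POST the deletion itself, passing the token last:
    curl -s -b cookies.txt 'https://<wiki>/w/api.php' \
         --data-urlencode 'action=delete' \
         --data-urlencode 'format=json' \
         --data-urlencode 'title=<pagename here>' \
         --data-urlencode 'reason=<reason here>' \
         --data-urlencode 'token=<token from the first call>'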

The pages I am unable to delete are: this one and this other one.

Pages that I successfully deleted are listed here.

I asked for help on wikimedia-operations, and @jcrespo suggested I ask here instead.

Event Timeline

Restricted Application added a subscriber: Aklapper. · Sep 14 2016, 10:36 AM
jcrespo added a subscriber: aaron.

@Anomie Could you take a look at this? I am a bit lost myself, and we are blocking a valuable contributor from doing maintenance.

CCing @aaron in case this could be related to the stricter defaults for transactions he has been working on.

Here is the error from Kibana:

"{""id"":""V9klagpAEDUAAYgTTugAAACE"",""type"":""DBTransactionError"",""file"":""/srv/mediawiki/php-1.28.0-wmf.18/includes/db/loadbalancer/LoadBalancer.php"",""line"":1139,""message"":""Para evitar la creación de lentitud alta de respuesta, la transacción fue abortada po",1
{ "file": "/srv/mediawiki/php-1.28.0-wmf.18/includes/db/loadbalancer/LoadBalancer.php", "line": 1592, "function": "Closure$LoadBalancer::approveMasterChanges", "args": [ "DatabaseMysqli" ] }, { "file": "/srv/mediawiki/php-1.28.0-wmf.18/includes/db/loadbalancer/LoadBalancer.php", "line": 1149, "function": "forEachOpenMasterConnection", "class": "LoadBalancer", "type": "->", "args": [ "Closure$LoadBalancer::approveMasterChanges;1919871024" ] }, { "file": "/srv/mediawiki/php-1.28.0-wmf.18/includes/db/loadbalancer/LBFactory.php", "line": 220, "function": "approveMasterChanges", "class": "LoadBalancer", "type": "->", "args": [ "array" ] }, { "file": "/srv/mediawiki/php-1.28.0-wmf.18/includes/db/loadbalancer/LBFactoryMulti.php", "line": 419, "function": "Closure$LBFactory::forEachLBCallMethod", "args": [ "LoadBalancer", "string", "array" ] }, { "file": "/srv/mediawiki/php-1.28.0-wmf.18/includes/db/loadbalancer/LBFactory.php", "line": 223, "function": "forEachLB", "class": "LBFactoryMulti", "type": "->", "args": [ "Closure$LBFactory::forEachLBCallMethod;2108279822", "array" ] }, { "file": "/srv/mediawiki/php-1.28.0-wmf.18/includes/db/loadbalancer/LBFactory.php", "line": 286, "function": "forEachLBCallMethod", "class": "LBFactory", "type": "->", "args": [ "string", "array" ] }, { "file": "/srv/mediawiki/php-1.28.0-wmf.18/includes/MediaWiki.php", "line": 563, "function": "commitMasterChanges", "class": "LBFactory", "type": "->", "args": [ "string", "array" ] }, { "file": "/srv/mediawiki/php-1.28.0-wmf.18/includes/api/ApiMain.php", "line": 526, "function": "preOutputCommit", "class": "MediaWiki", "type": "::", "args": [ "DerivativeContext" ] }, { "file": "/srv/mediawiki/php-1.28.0-wmf.18/includes/api/ApiMain.php", "line": 482, "function": "executeActionWithErrorHandling", "class": "ApiMain", "type": "->", "args": [] }, { "file": "/srv/mediawiki/php-1.28.0-wmf.18/api.php", "line": 83, "function": "execute", "class": "ApiMain", "type": "->", "args": [] }, { "file": "/srv/mediawiki/w/api.php", "line": 3, "function": "include", "args": [ "string" ] }

It certainly seems like it is hitting a write timeout limit.

Removing MediaWiki-API, since this has nothing to do with the API itself.

> CCing @aaron in case this could be related to the stricter defaults for transactions he has been working on.

The error quoted certainly looks that way: it is the Spanish-language text of the 'transaction-duration-limit-exceeded' error, thrown when a DB transaction takes longer than 'maxWriteDuration' seconds. The database writes themselves succeeded without error; the transaction just took too long, so Aaron's code rolled it back.

Chances are this particular transaction takes a long time simply because the pages in question have 45,768 and 48,615 revision rows that need to be moved from the revision table to the archive table.

"{""id"":""V9klagpAEDUAAYgTTugAAACE"",""type"":""DBTransactionError"",""file"":""/srv/mediawiki/php-1.28.0-wmf.18/includes/db/loadbalancer/LoadBalancer.php"",""line"":1139,""message"":""Para evitar la creación de lentitud alta de respuesta, la transacción fue abortada po",1

We normally log error messages in English rather than in whatever language the triggering user happens to have set.

Is there any way via the API sandbox (maxlag / maxage / smaxage) to circumvent that restriction?

@MarcoAurelio Translated, the message means the transaction was aborted to prevent it from causing excessive response slowness. Normally that mechanism makes changes safe to perform, but here it seems to be blocking the deletion outright because it is so large. We will see what we can do instead: whether we can temporarily increase the timeout or run a maintenance script to do it. I will wait for Aaron's opinion on this.

aaron added a comment. · Edited · Sep 14 2016, 5:20 PM

There is a deleteBatch.php maintenance script that can take a page via stdin, or a list of pages in a file, and delete them. There will be a lag bump, though.
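
For context, that script lives in MediaWiki's maintenance/ directory; a rough sketch of an invocation follows, where the user name, reason, and file name are illustrative:

    # pages.txt holds one page title per line; -u sets the deleting user,
    # -r the log reason, and -i sleeps N seconds between deletions so
    # replication can catch up.
    php maintenance/deleteBatch.php -u 'Maintenance admin' -r 'Requested deletion of pages with very large histories' -i 5 pages.txt

If no file is given, the script reads titles from stdin, matching the "via stdin or a list of pages in a file" behaviour described above.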

The full logs for those deletion attempts are at https://logstash.wikimedia.org/goto/71a4fabe5a5f021bea9c073e32556144 and involve an affected row count near 100,000.

aaron closed this task as Resolved. · Sep 14 2016, 5:56 PM
aaron claimed this task.

I've deleted both now.

Thank you. Is it possible to grant stewards higher limits when performing big deletions?

I know this bug is old and resolved, but I stumbled on it by accident while looking for something else ... for future reference, refreshing the page in your browser should restart the deletion, and doing that enough times should make it go through. I know that undeletions for history merges work that way.