
runJobs not following redirect
Closed, ResolvedPublic


The new Special Page for jobs doesn't follow redirects.

[runJobs] Running 1 job(s) via '/w/index.php?title=Special%3ARunJobs <snip>
[runJobs] Failed to start cron API: received 'HTTP/1.1 302 Found

I can get around it by mucking about with redirect rules, but I'd rather not do that.

Version: 1.23.0
Severity: normal



Event Timeline

bzimport raised the priority of this task to Normal.
bzimport set Reference to bz66485.
bzimport added a subscriber: Unknown Object (MLST).

In my 1.23.0 installation it doesn't return a 302. My wiki has $wgLanguageCode = "es", so requests to Special: normally redirect to Especial:, but that redirect doesn't happen when the page is called internally.

excerpt of my apache access_log: - - [12/Jun/2014:16:07:33 +0200] "POST /index.php?title=Special%3ARunJobs&tasks=jobs&maxjobs=1&sigexpiry=1402582058&signature=fbc222296801ed1d296b6656fd5c6c9d8d6bfcd4 HTTP/1.1" 202 - "-" "-"

Do you have rewrite rules or something like that configured on the server? Did you manage to find out where it redirects to?

I managed to put in a RewriteCond that catches the first entry, but it still tosses out the same error at the end because it got a 302 somewhere.

You should:

  • Check the access_log of Apache (or whichever webserver you use) and look for those redirects.
  • Set appropriate RewriteLog and RewriteLogLevel directives to debug your rewrite rules.

You can also request that URL directly with curl on the same server to see where it redirects you.
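For example, a curl invocation along these lines shows the response headers, so any 302 and its Location: target become visible (the host and query parameters here are placeholders; substitute the exact URL, signature, and expiry values from your own runJobs log line):

```shell
# -D - dumps response headers to stdout; -o /dev/null discards the body.
# Run this on the wiki server itself so it takes the same path as the
# internal Special:RunJobs POST.
curl -sS -D - -o /dev/null -X POST \
  'http://localhost/index.php?title=Special:RunJobs&tasks=jobs&maxjobs=1'
```

If the first response is a 302, the Location: header tells you which rewrite or redirect rule is intercepting the request.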

Ralfk added a subscriber: Ralfk.EditedApr 9 2015, 9:36 AM

I can confirm this problem on two of my wikis. On both wikis I have a redirect from HTTP to HTTPS configured for all connections. I would appreciate it if this problem could be fixed: the POST should simply follow redirects, or we should be able to configure the API access to use HTTPS.

BTW: Proper Apache rewrite conditions for bypassing HTTP to HTTPS redirects for localhost clients are:

RewriteCond %{REMOTE_HOST} !^127\.0\.0\.1$
RewriteCond %{REMOTE_HOST} !::1$
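Put in context, a full HTTP-to-HTTPS redirect that exempts loopback clients might look like the following sketch (the use of REMOTE_ADDR instead of REMOTE_HOST, and the redirect flags, are assumptions; adapt to your own VirtualHost setup):

```apache
# Redirect everything to HTTPS, except requests coming from the server
# itself (IPv4 and IPv6 loopback), so that the internal Special:RunJobs
# POST is not bounced with a 302.
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteCond %{REMOTE_ADDR} !^127\.0\.0\.1$
RewriteCond %{REMOTE_ADDR} !^::1$
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```

REMOTE_ADDR avoids a reverse DNS lookup, which REMOTE_HOST may trigger depending on the HostnameLookups setting.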
Aklapper lowered the priority of this task from Normal to Lowest.Apr 9 2015, 12:35 PM

I cannot understand the assignment of priorities here. This bug is a completely silent footgun, destroying parts of the wiki's functionality for people who configure their server for strict security. It just took me more than an hour to debug why, nondeterministically, updates of templates were sometimes not properly propagated to the pages using them, and why even calling "runJobs" later would not fix this. It turned out that for several months now, MediaWiki had been sending the job requests to the wrong place, got back an error, *logged that error* (well, if debugging was enabled), and still considered the job as executed. I have a redirect from "http://" to "https://" set up, and MediaWiki was stuck on this without telling anybody.

The default config should be changed to disable async jobs until they properly support HTTP, i.e., follow redirects and handle HTTPS. Alternatively, implement them by letting the client send an async request via JS - at least the client knows the actual server it is talking to, and it supports the necessary protocols.

But at the very least, if the job comes back not having executed (any status code except for 200), MW *must not* consider this job as finished. That's just plain wrong, and dangerous.
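The failover behavior argued for above can be sketched as a small decision helper (the function name and the treatment of 202 Accepted as success are assumptions for illustration, not MediaWiki's actual code):

```shell
#!/bin/sh
# Decide, from the HTTP status code of the async Special:RunJobs POST,
# whether the job run can be trusted or whether the jobs must be re-run
# synchronously. 200/202 count as accepted; everything else (a 302
# redirect as in this bug, 5xx errors, ...) must NOT mark the job done.
job_outcome() {
  case "$1" in
    200|202) echo "async-ok" ;;
    *)       echo "fall-back-to-sync" ;;
  esac
}

job_outcome 202   # the status from the access_log excerpt above
job_outcome 302   # the redirect from this bug report
```

The point is simply that the fallback branch exists at all: any non-success status leads to synchronous execution rather than silently dropping the job.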

Restricted Application added a subscriber: Aklapper. Oct 17 2015, 12:13 PM
Aklapper raised the priority of this task from Lowest to Low.Oct 23 2015, 7:49 PM

Worked around this with a cronjob executing runJobs and turning off all async calls, but this is a very nasty bug.

I've had runJobs going as a cronjob for a while, but I wasn't aware there were potential template updates that aren't making it through. I'll have to look into that some.

Another instance of this bug: Topic:T7v179ekcmj28vt2

If nobody is going to fix this, the default settings for MediaWiki should be safe enough to always work. I therefore propose turning off async job runs by default ($wgRunJobsAsync = false;) so that it works in all environments; people who want to improve performance can enable them if that works for their setup. The current behavior is too buggy.

Change 306154 had a related patch set uploaded (by Aaron Schulz):
Reduce problems caused by $wgRunJobsAsync

Change 306154 merged by jenkins-bot:
Reduce problems caused by $wgRunJobsAsync

aaron closed this task as Resolved.Aug 23 2016, 4:27 PM
aaron claimed this task.

Change 307798 had a related patch set uploaded (by Aaron Schulz):
Always fail over to sync jobs when Special:RunJobs fails

Change 307798 merged by jenkins-bot:
Always fail over to sync jobs when Special:RunJobs fails