Oct 12 2017
Dec 18 2016
Speak of the devil :p
@Kghbln Any insight as to when REL1_28 backport might get merged?
Dec 15 2016
MariaDB [metawiki]> select * from cw_wikis where wiki_dbname="reriawiki" or wiki_dbname="pnpwiki" or wiki_dbname="ronuoswiki" or wiki_dbname="reinventandolasorganizacioneswiki" or wiki_dbname="rwbyfrwiki" or wiki_dbname="rodintestonewiki" or wiki_dbname="rylnbgwiki" or wiki_dbname="ryinbgwiki";
Empty set (0.00 sec)
Dec 9 2016
@Dzahn Please let me know if you need anything else from us. All APIs should be working at whatever their $wgServer is set to in our configuration.
Dec 8 2016
@Dzahn all URLs are suffixed with "wiki" (MediaWiki grants are to something like '%wik%'.*).
@Dzahn Shouldn't be an issue but I copy/pasted MySQL output to be sure I didn't typo the DB name before saying it's deleted. Please only purge links associated with unknown databases below.
Dec 6 2016
@Dzahn Sites (at their configured URIs) have been accessible sporadically.
@Dzahn you might be able to check now. I'm hoping there are no underlying issues with our database migration (besides the one I've already noticed).
All of the Miraheze wikis should be back to normal. allthetropes.org has $wgReadOnly set, although the link you sent for the API worked.
Aug 6 2016
Note to everyone this issue was resolved (although 'invalid' seems more appropriate) on Miraheze simply by upgrading our CentralAuth submodule in our REL1_27 MediaWiki branch. I'm not sure exactly what the issue is, but all our wikis are now running REL1_27 with updated extensions.
Jun 11 2016
Thanks for the sub :)
Glad to see that we ended up getting a fix for this. It actually appeared that just the namespace content model change fixed the broken pages that I looked at, so I'm not exactly sure what the maintenance script did, but I'm glad everything appears to be working.
Apr 13 2016
Taking a wild guess but probably intentional.
Dec 28 2015
This is in response to this GH issue, which we tried to handle internally, or something.
Oct 5 2015
I too now see this on Google Chrome, both logged in and logged out.
Sep 17 2015
Thank you for this. Wouldn't like Travis jobs to fail (again) because our site is down with a different error code.
Sep 14 2015
Ah, true. Maybe something in Travis or your script could check the HTTP status code? Pretty sure 200 would be optimal, but some things (a timeout, for instance) are likely not your fault. Maybe things like 301 etc. should still error, but a gateway / site timeout shouldn't.
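A minimal sketch of the policy suggested above, assuming three CI outcomes (pass, fail, skip); the function name and the exact set of "timeout" codes here are my own illustration, not anything from the actual Travis script:

```python
def classify_status(code: int) -> str:
    """Map an HTTP status code to a CI outcome (assumed policy).

    200 passes; gateway/site timeouts (e.g. Cloudflare's 522/524)
    are treated as "not our fault" and skipped; everything else
    (redirects, client errors, plain 500s) still fails the job.
    """
    if code == 200:
        return "pass"
    if code in (502, 504, 522, 524):  # gateway / site timeouts
        return "skip"
    return "fail"
```

So a 301 or 404 would still fail the build, while a 522 from the CDN would just skip the check.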
@Addshore I raised the security level for our domain on CF. Might wanna take a look.
Ps sorry for being new to phabricator :p
Oh okay, I understand now. Well, a better error message would be useful; should I close this when our site is working? (Like I said, I've used pwb with the Orain testwiki.)
Sep 13 2015
If this issue is with Orain 522ing, the task might be invalid? I'm not sure, but if it's just an issue with our site being down, that's not a problem with the code. The site is actually erroring.
@jayvdb Orain's load balancer is currently being null-routed by our host after Cloudflare decided to pass along 1 GB of uncached requests to our servers in an hour.
Orain op here
Sep 1 2015
https://github.com/Orain/ansible-playbook/pull/720 hopefully fixed it
Unlike on GitHub, we should wait here before closing this IMO, but I feel rather silly for this :p
@Aklapper I've replied to a question on that GitHub issue related to our config settings; you might be interested.
@Aklapper definitely related. That's more or less when it started.
@Arcane21 I'm fairly certain that cron job was removed ~10 days ago by addshore
I've been manually running the LC rebuild on extloadwiki 2-3 times a day, every day I'm home, for the past week.
Aug 31 2015
Jul 7 2015
I've gone ahead and given the account the bot flag. I'll watch test.orain.org and the bot's edits. Note that this is still not approval, just... well, it is a bot :p