Okay, I made the new labeling campaign and it works fine \o/ http://labels.wmflabs.org/ui/wikidatawiki/
Will talk to @Lea_Lacroix_WMDE and @Lydia_Pintscher about communication to ask community to label these edits.
Fri, Jul 13
Honestly, defining how it should look is hard: the Lemma text should have an interface-independent direction, and the vertical bar should be put in the right place. Practically, they need to get very close to the edit button.
Thu, Jul 12
I don't have time to work on this. It needs a cleanup of the SVG file.
Wed, Jul 11
Remex is now deployed on French Wikipedia.
+2 rights on deployment-related repos are tightly entangled with the related production rights. For example, a person with +2 rights on the mediawiki-config repo must be a deployer; otherwise the +2 right is useless and does more harm than good. This is the case with operations/puppet as well: the person holding +2 rights must be a member of the ops LDAP group, otherwise there is basically nothing they can do or react to in case of a mistake, which can cause downtime for half an hour (the time it takes for the puppet config to re-run automatically). And since this right involves sudo rights on everything and access to the only private repo we have in prod (node passwords, SSL certificates, etc.), very, very few people have it, basically WMF SREs. I would love WMDE to have an SRE, but that's something else.
Letting *is bot/was bot* take precedence seems like the best approach to me. I'll make a patch.
Tue, Jul 10
I loaded it into Wikilabels and started labeling, but I ran into a funny problem: most of the edits are okay and were made by bots or by users who got blocked (a case that comes up very often is MechQuesterBot). Should we do another round of autolabeling, but ignoring the block condition? That would drop 4.9K need_review cases out of 6.6K, which means we probably need to go back to using the 500K sample to get a 5K sample for review.
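A minimal sketch of what that second autolabeling pass could look like, with bot status taking precedence over the block condition as suggested above. The `edits` structure and the `is_bot`/`was_bot`/`user_blocked` field names are hypothetical, not the actual Wikilabels schema:

```python
def autolabel(edits, ignore_blocks=False):
    """Split edits into auto-labeled and needs-review buckets.

    Edits by bots (current or former) are always auto-labeled; edits by
    blocked users are flagged for human review unless ignore_blocks is set.
    """
    autolabeled, needs_review = [], []
    for edit in edits:
        if edit["is_bot"] or edit["was_bot"]:
            # Bot status takes precedence over everything else.
            autolabeled.append(edit)
        elif edit["user_blocked"] and not ignore_blocks:
            needs_review.append(edit)
        else:
            autolabeled.append(edit)
    return autolabeled, needs_review
```

With `ignore_blocks=True`, blocked-user edits move from the review bucket to the auto-labeled one, which is the drop from 6.6K to 1.7K need_review cases described above.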
Mon, Jul 9
I added the new campaign to the PR: https://github.com/wiki-ai/articlequality/pull/63/commits/32337bd97ffa66a8e2876d214d73670c370b111f Please take a look
I think there should be some integration, but I disagree with this approach for several reasons: 1) human judgments can be vandalism or wrong in all sorts of ways; 2) this could feed into online learning, or into batches later used to retrain ORES models, but that method would let false predictions leak in.
I think a great integration here would be a way to find cases where ORES and JADE disagree and investigate why.
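A sketch of that disagreement finder; the `ores_prediction` and `jade_judgment` field names are illustrative, not actual API fields:

```python
def disagreements(cases):
    """Yield cases where the ORES prediction and the JADE human judgment differ.

    These are the interesting cases to investigate: either the model is
    wrong, or the human judgment is.
    """
    for case in cases:
        if case["ores_prediction"] != case["jade_judgment"]:
            yield case
```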
I'm all for it.
My friend took it over and finished it \o/
Thu, Jul 5
Did you try PSR-16 instead? Some related discussion
Wed, Jul 4
This is not needed anymore. Let's just close it.
It was done a while ago.
Hmm, yeah. I think we should file another Phabricator ticket, because that's about the ORES service and not the extension's way of handling unorthodox responses.
I checked, and those are timeout errors, which are better retried; they usually pass on the second or third try. We can reduce the maximum number of retries if it's still too high.
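A sketch of the bounded-retry behavior described above; the function name, retry cap, and delay are illustrative, not the extension's actual code:

```python
import time

MAX_RETRIES = 3    # illustrative cap; lowering it reduces load from retries
RETRY_DELAY = 0.5  # seconds between attempts

def fetch_scores(request, max_retries=MAX_RETRIES, delay=RETRY_DELAY):
    """Retry a scoring request on timeout.

    Most requests succeed on the second or third attempt; the last
    attempt re-raises the timeout so callers still see persistent failures.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return request()
        except TimeoutError:
            if attempt == max_retries:
                raise
            time.sleep(delay)
```

Reducing `max_retries` is the one-line change suggested above if the retry volume turns out to be too high.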
Tue, Jul 3
QUnit tests pass on my localhost but not here; it's driving me crazy. I'm calling it a day. Feel free to pick this up until the week after next.
I'll do it tomorrow.
It's going to be deployed today. I'm using T194950 for it.