Thu, Aug 22
This oversampling is definitely working correctly.
I've checked all the outstanding issues and we're ready to go.
- We are almost always logging the saved revision ID. Since 12 July, around 96% of save success events in each interface have included new revision IDs (that is, ones greater than those in previous events in the session). I've filed T231024 since it would be good to investigate it at some point, but the issue is rare enough that it won't significantly affect the analysis of this test.
- We have not affected non-test wikis. We have not recorded any events in the default-visual or default-source bucket at non-test wikis, and the rate of mobile visual edits at non-test wikis did not increase when we deployed the bucket.
- Oversampling is no longer affecting the edit completion rate. Since 12 July, when we started oversampling all the mobile wikitext sessions at our target wikis, the relationship between the edit completion rates in the two buckets has been the same whether or not the oversampled sessions are included. Although this still suggests that our sampling has some issues, we can be confident that they won't affect the A/B test, since we're logging all the relevant data rather than sampling. I will deprioritize T227931 but keep it open for possible future work.
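The oversampling check above can be sketched roughly like this: compute the completion rate per bucket twice, once over all sessions and once excluding the oversampled ones, and confirm the relationship between buckets looks the same. This is a minimal illustration with toy data; the column names (`bucket`, `oversampled`, `completed`) are hypothetical stand-ins, not the real event schema.

```python
import pandas as pd

# Toy session-level data; field names are hypothetical, not the real schema.
sessions = pd.DataFrame({
    "bucket":      ["default-visual"] * 4 + ["default-source"] * 4,
    "oversampled": [True, True, False, False] * 2,
    "completed":   [1, 0, 1, 1, 0, 1, 1, 0],
})

def completion_rate(df):
    # Edit completion rate: share of sessions that ended in a successful save.
    return df.groupby("bucket")["completed"].mean()

rate_all = completion_rate(sessions)
rate_sampled_only = completion_rate(sessions[~sessions["oversampled"]])

# The check: the relationship between the two buckets should look similar
# whether or not the oversampled sessions are included.
print(rate_all)
print(rate_sampled_only)
```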
Wed, Aug 21
@fdans I checked it over and we can archive/delete it—there's nothing we need to keep.
Tue, Aug 20
Mon, Aug 19
Let me just support Maya's request here. I work primarily in JupyterLab, but I still use Hue frequently for various things:
- Running quick queries or exploring the Data Lake (since Hue has a nice graphical table explorer, autocompletion, and a query history)
- Checking Oozie workflows and jobs
@elukey this is meant as something for y'all to do (I mentioned it in our last hangtime). We users don't have the ability to force upgrade everyone 😁
Wed, Aug 14
Thu, Aug 8
A couple of initial notes:
- This is not blocking any of our planned analysis of VE-as-default or edit cards.
- I imagine the intention behind logging the ID of the base revision (as opposed to logging the ID of the new revision in saveSuccess events) is to help us pinpoint base revisions that are breaking the editor somehow, but I doubt we've ever actually used it. So maybe we should just stop doing it. @DLynch, @Esanders, any thoughts?
Wed, Aug 7
Fri, Aug 2
All working now! Thanks to @Ottomata for the pointer and to @mforns for proactively checking in with me after he noticed all the failing jobs! And of course, thanks to @chelsyx for a nice, detailed set of instructions and a good opportunity to learn how Oozie works :)
Tue, Jul 30
Thanks for the ping @Jdforrester-WMF!
Mon, Jul 29
Implemented in this commit (with some follow-on clean-up and tweaks).
Jul 25 2019
The revert rate (proportion of edits that are fully reverted within 48 hours) of mobile visual edits is high, but declined 17% between June 2018 and June 2019. As a comparison, the revert rate of mobile wikitext edits did not decline over the same period.
The number of mobile VE edits increased 94% from June 2018 to June 2019. Interestingly, this was a much larger increase than mobile wikitext edits saw, even though we did not put any significant effort into routing more editors into mobile VE.
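The revert-rate comparison above boils down to a grouped proportion and a year-over-year change. A minimal sketch with toy data, assuming an edit-level table with hypothetical columns (`interface`, `year`, `reverted_within_48h`), not the real schema:

```python
import pandas as pd

# Toy edit-level data; field names are hypothetical stand-ins.
edits = pd.DataFrame({
    "interface": ["visual"] * 4 + ["wikitext"] * 4,
    "year":      [2018, 2018, 2019, 2019] * 2,
    "reverted_within_48h": [1, 1, 1, 0, 1, 0, 1, 0],
})

# Revert rate: proportion of edits fully reverted within 48 hours.
rates = edits.groupby(["interface", "year"])["reverted_within_48h"].mean()

# Relative change from 2018 to 2019, per interface (negative = decline).
change = (rates.xs(2019, level="year") / rates.xs(2018, level="year")) - 1
print(change)
```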
Jul 24 2019
Good work, @Iflorez! I assume you're planning to post the results once the query finishes?
Jul 23 2019
@ppelberg thanks for starting a great list!
Jul 22 2019
Jul 19 2019
Okay, we've taken care of all the issues with the notebook and the publication script.
@chelsyx I reran update_publish_notebook.sh, and it looks like all the problems are solved except that the script still can't move the HTML notebook to the published-datasets folder, even though you changed the permissions.
Jul 18 2019
Worked correctly for me on Chrome 75.0.3770.142 and Firefox 69.0b4.
@ppelberg what do you need from me here?