I'm working with the Growth team on features that keep growing the number of Wikipedia contributors.
User Details
- User Since
- Oct 4 2021, 3:13 PM (219 w, 1 d)
- Availability
- Available
- LDAP User
- Sergio Gimeno
- MediaWiki User
- SGimeno (WMF) [ Global Accounts ]
Today
Regarding the instrumentation QA: as of today at 15:00 UTC the experiment is enabled on testwiki, and the events can be inspected through the browser's network tab, or through the EventStreams app for the ones produced in the backend.
Yesterday
Fri, Dec 12
Why are we using the "compute-schedule-compute-again" approach? If we want the computation to happen within the job, wouldn't it make more sense to just schedule the job and let it compute?
I believe the reason is an intent to make the data generated from a web request and the data generated from the refreshUserImpactData maintenance script (T324675) more consistent. In 881414: Process more articles when fetching page view data, two methods are introduced: one for requesting the page view data from a web request context, and another from a job context. The job variant makes consecutive calls to the page views service for up to 5 minutes, until it has fetched page view data for up to 1k articles.
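The job-context behavior described above (retrying the page views service for up to 5 minutes until up to 1k articles are covered) could be sketched roughly like this. This is an illustrative sketch, not the actual GrowthExperiments code; the function and parameter names are invented:

```python
import time

def fetch_page_views_with_retries(fetch_batch, article_titles,
                                  max_articles=1000, deadline_seconds=300):
    """Keep calling the page views service until we have data for up to
    `max_articles` articles or the deadline passes.

    `fetch_batch` is a callable taking a list of titles and returning a
    dict of {title: view_count} for the titles it managed to resolve.
    """
    results = {}
    remaining = list(article_titles[:max_articles])
    deadline = time.monotonic() + deadline_seconds
    while remaining and time.monotonic() < deadline:
        batch = fetch_batch(remaining)
        results.update(batch)
        remaining = [t for t in remaining if t not in results]
        if not batch:
            # Back off briefly if the service returned nothing this round.
            time.sleep(1)
    return results
```

The web-request variant would presumably make a single call instead of looping, which is what makes the two code paths produce different data unless the job recomputes.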
Could you elaborate on the string $identifier parameter? In the test I can see $identifier = '0x0ff1c3';, but I don't know which responsibilities/assumptions callers should take care of. The first surprise is the string type, as my current (shallow) understanding is that we'd prefer to work with MW user local or central IDs.
Tue, Dec 9
Mon, Dec 1
I don't think the page makes much sense if SE are not enabled. Would it be fine to conditionally register it?
Fri, Nov 28
Thu, Nov 27
Tue, Nov 25
Not sure why mwscript-k8s is stopping at afwiki when I run this:
Mon, Nov 24
The optimizations are now testable in beta, cc @AAlhazwani-WMF, feel free to resolve the task if you are satisfied with the design review.
Fri, Nov 21
This is my proposal to capture all possible on-boarding end interactions and context:
Thu, Nov 20
I cannot see data that tracks the correctness of the user's answers, is that something we should/could add to the click event that we track when the user clicks on "Get started"?
This sounds good, given we're re-testing bucketing as well while testing the instrumentation (T405177).
Resolving based on QA feedback.
Wed, Nov 19
Should we keep tracking the notifications CTRs (or maybe a simplified version with just an aggregated CTR including primary and secondary links)? The development cost is almost none; building the dashboard seems to be the more time-consuming part. cc @KStoller-WMF
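For clarity, the simplified metric suggested above would just fold both link types into one ratio. A minimal sketch (names are illustrative, not an existing dashboard query):

```python
def aggregated_ctr(primary_clicks: int, secondary_clicks: int,
                   impressions: int) -> float:
    """Aggregated notification CTR: clicks on either the primary or the
    secondary link, divided by notification impressions."""
    if impressions == 0:
        return 0.0
    return (primary_clicks + secondary_clicks) / impressions
```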
I don't think this task requires any QA; resolving.
Reviewing the Figma spec, we've noticed that the border radius is not rounded for the rest of the modules in the Mentor Dashboard. Is this intentional? From an engineering POV we would prefer a consistent solution where all of the borders have the same appearance.
Moving this to QA while I self-QA it and get @Iflorez's feedback.
Tue, Nov 18
I found an issue in the migration script while running it in dry-run mode; for instance, for eswiki this was the output:
There are changes:
Additions:
{
    "Mentors": {
        "5471712": {
            "awayTimestamp": "20251118130934"
        },
        "1473120": {
            "awayTimestamp": "20260623200305"
        }
    }
}
Deletions:
{
    "Mentors": {
        "5471712": {
            "awayTimestamp": "2025-11-18T13:09:34Z"
        }
    }
}

I forgot to update the script in T406701, and now it would save the values again using the MediaWiki timestamp format rather than ISO. Providing a fix before running.
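The diff above mixes two timestamp representations: ISO 8601 ("2025-11-18T13:09:34Z") in the deleted values and MediaWiki's 14-digit TS_MW format (YYYYMMDDHHMMSS, e.g. "20251118130934") in the added ones. As a sanity check, a conversion between the two (a sketch, not the migration script itself) looks like this:

```python
from datetime import datetime, timezone

def iso_to_mw(iso_ts: str) -> str:
    """ISO 8601 (UTC, 'Z' suffix) -> MediaWiki TS_MW (YYYYMMDDHHMMSS)."""
    dt = datetime.strptime(iso_ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return dt.strftime("%Y%m%d%H%M%S")

def mw_to_iso(mw_ts: str) -> str:
    """MediaWiki TS_MW (YYYYMMDDHHMMSS) -> ISO 8601 with 'Z' suffix."""
    dt = datetime.strptime(mw_ts, "%Y%m%d%H%M%S").replace(tzinfo=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%SZ")
```

The "5471712" entry in the diff round-trips correctly through these two functions, which is what makes the format mismatch easy to spot.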
Mon, Nov 17
Nov 14 2025
Nov 13 2025
Nov 12 2025
@Iflorez, I have some questions about the instrumentation spec:
- Constructive edit rate: the spec for this event says edit (made on mobile web within a user's first 24 hours [where the edit is not reverted within 48 hrs]). I am assuming these restrictions will be enforced in post-analysis rather than by the instrumentation itself, because I'm seeing time criteria applied in the constructive edit rate metric query (for instance INTERVAL 48h). Is this assumption correct?
- Is the assumption above also correct for the rest of the metrics that use the edit_save event data? Put differently: does the edit_save event need to be recorded for all kinds of edits performed by users in the experiment sample (both groups) during the experiment, regardless of their account age, edit count, etc.?
- The task rejection rate requires recording a page-visited event, whereas the task completion rate requires a click on the RT card itself (which launches the onboarding). Is this intentional? Shouldn't the denominator for these two rates be the same: clicks on the RT card? If not, could you clarify which "page visited" we are supposed to record?
Nov 10 2025
As I understand the comment, it assumes we would be using these three schemas (homepagevisit, homepagemodule, newcomertasks), but @Iflorez requested that we use Test Kitchen as much as possible. So my idea was to use the web base schema instead, to ensure we remain on a low-risk tier. Does this make sense?
I spotted a couple of issues with number formatting while testing the dialog on enwiki, forcing the language to Arabic using uselang=ar:
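For context on what uselang=ar changes: for the ar locale, MediaWiki transliterates ASCII digits into Eastern Arabic (Arabic-Indic) numerals, so any number assembled in code without going through the localization layer stands out. A minimal sketch of that digit mapping (illustrative, not MediaWiki's actual formatter):

```python
# Western -> Eastern Arabic (Arabic-Indic) digit map; U+0660 is ARABIC-INDIC DIGIT ZERO.
ARABIC_INDIC = {ord(str(d)): chr(0x0660 + d) for d in range(10)}

def localize_digits_ar(text: str) -> str:
    """Replace ASCII digits 0-9 with Arabic-Indic digits (٠-٩)."""
    return text.translate(ARABIC_INDIC)
```

A string like "1234" becomes "١٢٣٤" after this mapping, which is the kind of mismatch that surfaces in the dialog when some numbers are localized and others are not.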
Nov 7 2025
This is very helpful @matmarex, thank you! I will dive into the details to try to give a rough plan and estimation to each of the proposed interventions.
Nov 6 2025
Nov 5 2025
Nov 4 2025
I think the particular text and interface we would want to modify is created in includes/EditPage/EditPage.php#2892. Assuming it is OK for Growth engineers to work on that area of the code, I don't know how we'd conduct an experiment for a core feature. Since xLab/MetricsPlatform is an extension, the setup described in the docs https://wikitech.wikimedia.org/wiki/Experimentation_Lab/Conduct_an_experiment#Code is not directly actionable. Has xLab been used for an A/B test in core before? I vaguely remember Sam mentioning a PHP-version A/B test setup, but I don't know if that example is even relevant to what we want to do here. cc @mpopov @phuedx
Oct 31 2025
Oct 30 2025
Oct 29 2025
I ran it manually and it finished correctly; tentatively resolving.