Search Open Tasks
    • Task
    The SSL client certificate for jschoenig will expire on 20260227-200035. Please work with them to get the new certificate issued and installed.
    • Task
    The SSL client certificate for ppenloglou will expire on 20260227-170820. Please work with them to get the new certificate issued and installed.
    • Task
    The SSL client certificate for hahmed will expire on 20260227-160257. Please work with them to get the new certificate issued and installed.
    • Task
    This is a long-term discussion for engineers and not a priority. We would like to explore how we can match what we are currently able to track in `event.android_install_referrer_event` for Android with iOS data. We use this Android data to track CBN Campaign install rates and to track user churn from first install; not having this data for iOS has been a longstanding limitation, because iOS does not give the app an official “install” or “post-install” callback. We may not be able to get parity with Android, but it would be good to get a 'first time launch' event or something similar. Ideally this data is eventually included in the new planned Apps Authorization apps_base data stream.
    • Task
    The buildContentNavigationUrlsInternal method is many lines long, making it hard to reason about. Refactoring it would support optimizing skin generation by allowing us to only generate menus when needed in {T331360}
    • Task
    Create Instrumentation Event Documentation and Instrumentation Planning and Spec docs for the planned data migration from the current bifurcated datasets used by Android and iOS to a unified, shared data stream.
    • Task
    In the late planning stage - instrumentation for the new data collection plan is in a subtask. Android and iOS apps currently track account creation and login events on different schemas/data streams, with some gaps that need to be addressed. We are planning to move both to the shared `apps_base` stream, with unified event naming/tracking. This will also include hCaptcha logging and other logging events related to account validation as needed. Current iOS Data: `event.ios_login_action` Current Android Data: `event.android_create_account_interaction` `event.android_install_referrer_event` [[ https://docs.google.com/spreadsheets/d/1ceK5xVvzmikRpDyNgUQKR0gWRQiG9pdP4y8dAoNYr20/edit?gid=0#gid=0 | Data/Queries we frequently use ]]
    • Task
    This task involves the work of deciding how to handle cases where a {nav Add citation} suggestion appears for a sentence that precedes a quote that includes a citation. The above was prompted by a case wherein someone noticed a suggestion appearing for a sentence that followed this structure: `"x .... said:" followed by a quote of what the person had said and, at the end of the quote, a citation.` === Story === Related - {T406761} - {T401968} --- //Thank you to @leila for noticing and reporting this case.//
    • Task
    ### Description The randomizer button is appearing with a border. ### Requirements [] Adopt surface button (no glass effect) ### Acceptance Criteria [] No border [] Dice animation still works ### Design References Issue: https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6330-25503&t=hvbPpb07Whm8IO5G-4 Fix: https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6330-25502&t=hvbPpb07Whm8IO5G-4 ### Platforms [X] iPhone [X] iPad
    • Task
    ### Description Whenever possible - but mainly in the Explore feed - it would be nice to hide the top bar elements on scroll; this way the interface is cleaner for browsing and discovering. On scroll up they come back. ### Requirements [] Hide top bar elements (Wikipedia, tabs, profile, search) on scroll on Explore (Match article view behavior) [] Display them again upon fast scroll up (Match article view behavior) [] Collapse Wikipedia logo into W upon scroll ([[ https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6169-27467&t=xyFPvuZyyPl8TBXX-4 | figma ]]) ### Design References https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6169-27466&t=isSy2lPQDXUx2pLD-4 ### Platforms [X] iPhone [X] iPad
    • Task
    This is the offboarding ticket for Alex per docs for offboarding an SRE: https://wikitech.wikimedia.org/wiki/SRE_Offboarding#Phabricator_ticket [] update LDAP permissions based on NDA status [] update Phabricator permissions based on NDA status [] update [https://github.com/orgs/wikimedia/people github] access based on NDA status [] Check HBase/Hadoop permissions and inform the SRE analytics team [] update user in [[https://github.com/wikimedia/puppet/blob/production/modules/admin/data/data.yaml | modules/admin/data/data.yaml]] [] run the logout cookbook Additional task for SRE team members [] Review access to internal IRC channels [] Remove from ops mailing lists (ops and ops-private) [] Remove from private Exim aliases [] Remove VictorOps and OnCallOptimiser users [] Remove Icinga user [] Remove from pwstore [] Review access to network devices (and potentially remove access) [] Remove Kerberos principal (if present)
    • Task
    ### Description The current empty state feels a bit disjointed, and it is not clear what the primary action is. With a few elements here, it's easy to tell that this is not native, and we should prioritize making clear what we want users to do. ### Requirements For all empty states in the app: [] Update buttons to native and ensure Wikipedia colors are applied [] Update typography to be consistent with the rest of the app ### Acceptance Criteria - No custom elements ### Design References Issue Saved Reading lists empty state: https://www.figma.com/design/HoyTB8udiGRPQ4jTzhGy4d/Liquid-Glass?node-id=3971-503&t=2snKf2XnwFFQ8mpv-1 Issue Watchlist: https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6243-28653&t=6YfSlcfY4Dk4eOPS-4 Redesign Watchlist: https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6307-38544&t=6YfSlcfY4Dk4eOPS-4 ### Platforms [X] iPhone [X] iPad
    • Task
    ### Description Current overflow menu looks more like a popover. We should use the menu component instead ### Requirements [] Update overflow menus throughout the app to use the menu component [] Add icons to Explore overflow menu ([[ https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6248-24757&t=5HD2BWHD9GynIWwY-4 | figma ]]) ### Acceptance Criteria - Users can close overflow menu by tapping outside of it - No custom elements ### Design References Issue: https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6248-23364&t=1LIV4m21nEycq8SO-4 Redesign: https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6248-24757&t=1LIV4m21nEycq8SO-4 ### Platforms [X] iPhone [X] iPad
    • Task
    ### Description There is a bug in the experience, and it has outdated elements ### Requirements [] Fix display bug where several elements are overlapping [] Remove top article toolbar from this view (Back button, W, Tabs, Profile) [] Update buttons and elements to liquid glass ### Acceptance Criteria - All elements are readable, not overlapping - No custom elements ### Design References Bug: {F72085724} Essential functionality: {F72085727} Issue: https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6274-27276&t=URk71RMXn4KXyMux-4 Redesign: https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6272-26939&t=URk71RMXn4KXyMux-4 ### Platforms [X] iPhone [X] iPad
    • Task
    ### Description The tabs used, specifically for languages within the search experience, are far from native, causing a somewhat disjointed interaction and odd experience. We should use the smaller pill version of the button. ### Requirements [] Use native pill-shaped buttons for languages in Search [] If a user has more languages than fit on screen, they can scroll horizontally to view all languages [] Language pills are visible while scrolling through search results ### Acceptance Criteria - Maintains ability to add a new language from the Search entry point - No language selectors appear when the user only has 1 language - No custom elements ### Design References Issue: https://www.figma.com/design/HoyTB8udiGRPQ4jTzhGy4d/Liquid-Glass?node-id=3905-53&t=2snKf2XnwFFQ8mpv-1 Fix: https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6161-24729&t=1LIV4m21nEycq8SO-4 ### Platforms [X] iPhone [X] iPad
    • Task
    == Background One of the big successes of dark mode was that it drove a lot of Codex design token adoption, as interfaces that used custom colors broke in dark mode. In a few select cases we worked around this with the expectation that over time we would replace those custom colors with design tokens. Back in December we introduced a public mixin called [[ https://gerrit.wikimedia.org/r/c/mediawiki/core/+/1216619/10/resources/src/mediawiki.less/mediawiki.mixins.less | darkmode-custom-fix ]] which is currently only used by DiscussionTools. We'd like to discourage its use, as this could move us towards a pattern that's harder to maintain in the long term (for example: deprecating mixins has been very difficult historically), and we should not be encouraging extension developers to favor "custom dark fixes" over evolving with the Codex design system. == User story As a developer I want to be guided about the best way to integrate dark mode for my extension/skin. == Requirements [] Use of the mixin should trigger a stylelint warning. https://github.com/wikimedia/stylelint-config-wikimedia/pull/256 [] The mixin is renamed before it sees further adoption [] DiscussionTools is patched to use the new name === BDD - For QA engineer to fill out === Test Steps - For QA engineer to fill out == Design - Add mockups and design requirements == Acceptance criteria - Add acceptance criteria == Communication criteria - does this need an announcement or discussion? - Add communication criteria == Rollback plan - What is the rollback plan in production for this task if something goes wrong? //This task was created by Version 1.2.0 of the [[ https://mediawiki.org/w/index.php?title=Reading/Web/Request_process | Web team task template ]] using [[ https://phabulous.toolforge.org/ | phabulous ]] //
    • Task
    == Requestor provided information and prerequisites == **Complete ALL items below as the individual person who is requesting access:** * Wikimedia developer account username: HMonroy * Email address: hmonroy@wikimedia.org * SSH public key (must be a separate key from Wikimedia cloud SSH access): ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDpAY7SjRAKEu5fnQt+66helFANk91AfDXbDUX6KEB2W hmonroy@wikimedia.org * Requested group membership: `analytics-privatedata-users` * Reason for access: Need access to the stats hosts, Data Lake, Kerberos in order to gather wishlist engagement data and make it visible to Superset. T413123 * Name of approving party (manager for WMF/WMDE staff): @KSiebert * Ensure you have signed the L3 Wikimedia Server Access Responsibilities document: Done * Please coordinate obtaining a comment of approval on this task from the approving party. == SRE Clinic Duty Confirmation Checklist for Access Requests == This checklist should be used on all access requests to ensure that all steps are covered, including expansion to existing access. Please double check the step has been completed before checking it off. **This section is to be confirmed and completed by a member of the #SRE team.** [] - User has signed the L3 Acknowledgement of Wikimedia Server Access Responsibilities Document. [] - User has a valid NDA on file with WMF legal. (All WMF Staff/Contractor hiring are covered by NDA. Other users can be validated via the NDA tracking sheet) [] - User has provided the following: developer account username, email address, and full reasoning for access (including what commands and/or tasks they expect to perform) [] - User has provided a public SSH key. This ssh key pair should only be used for WMF cluster access, and not shared with any other service (this includes not sharing with WMCS access, no shared keys.) [] - The provided SSH key has been confirmed out of band and is verified not being used in WMCS. [] - access request (or expansion) has sign off of WMF sponsor/manager (sponsor for volunteers, manager for wmf staff) [] - access request (or expansion) has sign off of group approver indicated by the approval field in data.yaml For additional details regarding access request requirements, please see https://wikitech.wikimedia.org/wiki/Requesting_shell_access
    • Task
    Since at least 19 January (oldest logs at time of writing), this has been spamming syslog on the codesearch9 vm
```
2026-01-19T00:00:51.145614+00:00 codesearch9 confd[4140034]: 2026-01-19T00:00:51Z codesearch9 /usr/bin/confd[4140034]: INFO SRV record set to _etcd-client-ssl._tcp.codesearch.eqiad1.wikimedia.cloud
2026-01-19T00:00:51.147887+00:00 codesearch9 confd[4140034]: 2026-01-19T00:00:51Z codesearch9 /usr/bin/confd[4140034]: FATAL Cannot get nodes from SRV records lookup _etcd-client-ssl._tcp.codesearch.eqiad1.wikimedia.cloud on 172.20.255.1:53: no such host
--
2026-01-19T00:01:01.370863+00:00 codesearch9 confd[4140040]: 2026-01-19T00:01:01Z codesearch9 /usr/bin/confd[4140040]: INFO SRV record set to _etcd-client-ssl._tcp.codesearch.eqiad1.wikimedia.cloud
2026-01-19T00:01:01.380739+00:00 codesearch9 confd[4140040]: 2026-01-19T00:01:01Z codesearch9 /usr/bin/confd[4140040]: FATAL Cannot get nodes from SRV records lookup _etcd-client-ssl._tcp.codesearch.eqiad1.wikimedia.cloud on 172.20.255.1:53: no such host
--
2026-01-19T00:01:11.629049+00:00 codesearch9 confd[4140048]: 2026-01-19T00:01:11Z codesearch9 /usr/bin/confd[4140048]: INFO SRV record set to _etcd-client-ssl._tcp.codesearch.eqiad1.wikimedia.cloud
2026-01-19T00:01:11.631151+00:00 codesearch9 confd[4140048]: 2026-01-19T00:01:11Z codesearch9 /usr/bin/confd[4140048]: FATAL Cannot get nodes from SRV records lookup _etcd-client-ssl._tcp.codesearch.eqiad1.wikimedia.cloud on 172.20.255.1:53: no such host
```
It seems to happen every 10 seconds, which checks out given we have ~8000 entries per day:
```
krinkle@codesearch9:~$ sudo zgrep 'Cannot get nodes from SRV records lookup _etcd-client-ssl._tcp.codesearch.eqiad1.wikimedia.cloud on 172.20.255.1:53: no such host' /var/log/syslog.19.gz | wc -l
8434
krinkle@codesearch9:~$ sudo zgrep 'Cannot get nodes from SRV records lookup _etcd-client-ssl._tcp.codesearch.eqiad1.wikimedia.cloud on 172.20.255.1:53: no such host' /var/log/syslog.18.gz | wc -l
8274
```
Related:
* {T196596}
* {T116224}
    • Task
    We (Search Platform and DPE SRE) have discussed the need for maintaining and applying [[ https://docs.opensearch.org/latest/install-and-configure/configuring-opensearch/index-settings/#dynamic-cluster-level-index-settings | OpenSearch cluster dynamic settings ]] in a consistent and visible way in T414095 and T415822. While I closed T415822, thinking that we could do the work with an `initContainer` immediately after cluster bootstrap, the Search Platform use case detailed in T414095 suggests we need something a bit more robust. So far I have been unable to find a ready-made solution for this in the OpenSearch ecosystem. Thus, I believe it is appropriate to start discussing a home-grown solution. I've created [[ https://docs.google.com/document/d/1D3qjRcKFfKKAMZPkJ7BTayP5olV9P5ou-r4VLkUdos4/edit?tab=t.aimkt82jlpgb | this Google doc ]] to flesh out and collaborate on the design. Creating this ticket as an umbrella task for the design/implementation/deployment.
    • Task
    [[ https://wikitech.wikimedia.org/wiki/Kubernetes/Administration#Using_charlie_to_run_helmfile_on_all_services | charlie ]] currently has the root directory `/srv/deployment-charts/helmfile.d/services` as a module-level constant, meaning it's only usable on wikikube. We should make that configurable via a command-line flag so that it can be used in other clusters too. As a side effect, that will also mean it can be used from e.g. `/home/elitehacker/my-local-checkouts/deployment-charts/helmfile.d/services`. That's exactly as error-prone, but exactly as powerful in certain situations, as running `helmfile apply` in the same way, so (this being a power tool for power users) we ought to support it.
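    A minimal sketch of what the flag could look like, assuming charlie is a Python CLI (the flag name, default, and structure here are illustrative, not charlie's actual code):
```lang=python
import argparse
from pathlib import Path

# Today this is a module-level constant; keeping it as the flag's default
# preserves current behavior on wikikube deployment hosts.
DEFAULT_SERVICES_DIR = "/srv/deployment-charts/helmfile.d/services"

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run helmfile on all services")
    parser.add_argument(
        "--services-dir",
        type=Path,
        default=Path(DEFAULT_SERVICES_DIR),
        help="Root directory containing one helmfile service directory per service",
    )
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    # Iterate over per-service helmfile directories under the chosen root.
    for service_dir in sorted(p for p in args.services_dir.iterdir() if p.is_dir()):
        print(service_dir.name)
```
With a default in place, existing invocations keep working unchanged, while other clusters (or a local checkout) pass `--services-dir` explicitly.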
    • Task
    I've seen the following several times from my catalyst-api process while testing patchdemo locally:
```
[catalyst-api][GIN] 2026/02/13 - 21:55:06 | 200 | 1m59s | 10.244.0.22 | GET "/api/environments/4/logs?stream=mediawiki/install-mediawiki"
[catalyst-api]Error #01: write tcp 10.244.0.42:8080->10.244.0.22:58182: write: broken pipe
[catalyst-api]Error #02: write tcp 10.244.0.42:8080->10.244.0.22:58182: write: broken pipe
[catalyst-api]Error #03: write tcp 10.244.0.42:8080->10.244.0.22:58182: write: broken pipe
[catalyst-api]Error #04: write tcp 10.244.0.42:8080->10.244.0.22:58182: write: broken pipe
[catalyst-api]Error #05: write tcp 10.244.0.42:8080->10.244.0.22:58182: write: broken pipe
[catalyst-api]Error #06: write tcp 10.244.0.42:8080->10.244.0.22:58182: write: broken pipe
[catalyst-api]Error #07: write tcp 10.244.0.42:8080->10.244.0.22:58182: write: broken pipe
...hundreds more...
[catalyst-api]Error #594: write tcp 10.244.0.42:8080->10.244.0.22:58182: write: broken pipe
```
    • Task
    ### Description Right now, a side panel is being used for the app's table of contents, which is discouraged on iOS. We should use a Sheet instead, patterned off of Apple Books. ### Requirements [] Update table of contents to use a sheet [] Sheet can be dismissed by close button or swiping down [] Maintain headings & subheadings ### Acceptance Criteria - No custom elements are used - All key functionality of the table of contents is preserved - Proper handling of italics and special characters in section titles ### Design References Issue: https://www.figma.com/design/HoyTB8udiGRPQ4jTzhGy4d/Liquid-Glass?node-id=4071-6176&t=2snKf2XnwFFQ8mpv-1 Redesign: https://www.figma.com/design/nXIizaWUKh7yuS52LrTkch/Liquid-Glass-Redesign-version?node-id=6185-46871&t=u28oleSxiMh8tw2V-4 ### Platforms [X] iPhone [X] iPad
    • Task
    **Release version**: 7.9.1 (5957) **Release tag**: releases/n.n.n **Release SHA1**: **App Store Submission Date**: 02/13/2026 **App Store Approval Date**: **App Store Release Date**:
    • Task
    When the editor initially loads and there are suggestions present, those "suggestion-shown" events should be logged; they currently are not. Noticed this from confusing data in the [[ https://superset.wikimedia.org/superset/dashboard/49aebb3f-75ec-4a84-9e54-60c2266183c4/ | Superset dashboard ]] where "suggestion-seen" events were outnumbering "suggestion-shown" events, when this should be impossible.
    • Task
    ### Background The MVP of hybrid search (semantic and lexical) shows lexical results at the top of search, and semantic results below. In the current Android app, if you search for a complex query, you often receive no results. With the prototype, there are sometimes situations where we have only semantic results to show users. This is happening for 2/5 of the sample queries in Greek. #### Example If we search Πότε ο Πλούτωνας έπαψε να θεωρείται πλανήτης (When did Pluto stop being a planet) in our current app experience, we get no results: {F72084897} If we tap on the first suggested query Πότε ο Πλούτωνας έπαψε να θεωρείται πλανήτης (When did Pluto stop being a planet) on the semantic search onboarding screen, we see: {F72084898} ### Requirements [] PM @JTannerWMF to decide what the empty state behavior should be when there are only semantic results and no lexical results.
    • Task
    #### Background - Release date: phased release began Thursday, Feb 12; out at 100% on TBD - 15 days: estimated March 3 #### The Task [] Compare results to the baseline data that was collected [] Visualize and present the data in a way that is easily understandable to the team #### Requirements - The data should be based on the metrics in the Epic #### At 15 days - Check metrics from EPIC: {T414222} and compare to baseline - Evaluate performance on hypothesis: If we add additional modules to the activity tab and scale it to all users, we’ll see a 5% increase in overall iOS app account creation compared to baseline
    • Task
    * My username on wikitech.wikimedia.org is: ASanford-WMF * Shell username: `alexsanford` * Original request for production access: T416710 * Email address: asanford@wikimedia.org
    • Task
    Whenever someone makes a typo in the title of a CentralNotice banner (which happens pretty frequently, at least for me), it doesn't appear possible to rename it without just cloning it again. For example, here I accidentally used 2025 instead of 2026: https://meta.wikimedia.org/wiki/Special:CentralNoticeBanners/edit/Wikicurious_Austin_Public_Library_March_2025
    • Task
    If an event is postponed for a reason such as extreme weather, the current setup forces you to create an entirely new event page and requires all of the participants to re-register, e.g.: https://en.wikipedia.org/wiki/Event:Wikipedia_Day_NYC_2026 https://en.wikipedia.org/wiki/Event:Wikipedia_Day_NYC_March_2026_(Rescheduled_Date)
    • Task
    Documentation at https://www.mediawiki.org/wiki/Content_Transform_Team/Chores - [] Vendor patch for commit `XXXXXXX` for train 1.46.0-wmf.XX (TXXXXXX (train ticket)) -- [] RT-testing started (Friday by Europe EOD) -- [] regression script run -- [] RT-testing logs checked -- [] Vendor+core patch created -- [] Deployment changelog -- [] Vendor patch reviewed -- [] Patches (vendor + core) merged (Monday by US EOD) -- [] Post-merge test edits on [[ https://en.wikipedia.beta.wmcloud.org/wiki/Main_Page | beta ]] - [] Group 0 -- [] logstash checked -- [] Grafana checked -- [] Post-deploy [[ https://www.mediawiki.org/wiki/Special:RecentChanges | recent changes ]] monitored (filter by visual edit) - [] Group 1 -- [] logstash checked -- [] Grafana checked -- [] Post-deploy [[ https://it.wikipedia.org/wiki/Speciale:UltimeModifiche? | recent changes ]] monitored (filter by visual edit) - [] Group 2 -- [] logstash checked -- [] Grafana checked -- [] Post-deploy [[ https://en.wikipedia.org/wiki/Special:RecentChanges | recent changes ]] monitored (filter by visual edit) - [] Post-deploy VisualDiff Run -- [] Visual Diff Run kicked off (args -XXX N) -- [] Visual Diff Run diffs & uploaded to google drive -- [] Visual Diff Run confidence report generated -- [] (optional) diffs processed - [] Update status on [[ https://www.mediawiki.org/wiki/Parsoid/Deployments | deployment changelog ]] to done - [] Monitor [[ https://www.mediawiki.org/wiki/Talk:Parsoid/Parser_Unification/Known_Issues | Parsoid Community-reported issues ]] and [[ https://www.mediawiki.org/wiki/Parsoid/Feedback | Parsoid Feedback ]] (Thursday before triage meeting) - [] (optional) PCS deployment - [] (optional) Wikifeeds deployment - [] Next week's phab created and linked on Slack bookmarks (template: https://www.mediawiki.org/wiki/Content_Transform_Team/Chores/Phabricator_template; link it here)
    • Task
    As part of the Recommender System for patrollers T398071, under this hypothesis we are testing how to create more comprehensive recommendations. **Hypothesis**: If we build an article similarity model, we can provide better personalized recommendations to editors based on their topics of interest.
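    As an illustration only (not the team's chosen model), the hypothesis could be realized as a nearest-neighbour lookup over per-article embedding vectors; how those embeddings are produced is the actual research question and out of scope here:
```lang=python
import numpy as np

def top_k_similar(embeddings: np.ndarray, article_idx: int, k: int = 5) -> np.ndarray:
    """Return indices of the k articles most similar to article_idx by cosine similarity."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = normed @ normed[article_idx]
    scores[article_idx] = -np.inf  # never recommend the article itself
    return np.argsort(scores)[::-1][:k]

# Toy usage with random vectors standing in for real article embeddings.
emb = np.random.default_rng(0).random((1000, 128))
print(top_k_similar(emb, article_idx=42))
```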
    • Task
    I'm in their LAN now, I think (the LAN of norme.iccu.sbn.it), but it seems nobody has shared the server credentials or IPs with me yet.
    • Task
    I am using Ubuntu 24.04, Chrome, and MediaWiki 1.46.0-alpha (143559b) 09:13, 12. Feb. 2026 with MathJax rendering. The math code <math>'</math> is not rendered correctly with MathJax; the result of the math code is empty. {F72081140} The example is taken from https://de.wikipedia.org/w/index.php?title=Lp-Raum&oldid=263159901
    • Task
    The silverpop_daily job runs several child process-control jobs. Yesterday the first child job timed out and sent a failmail, but the parent job went on to run the following jobs.
```lang=yaml
command:
  - /usr/bin/run-job --job silverpop_emails_build_export_files
  - /usr/bin/run-job --job silverpop_emails_upload_data_file
```
It seems like we're trying to bail out with fail_exitcode on the first failed subprocess, but that isn't happening in practice: https://phabricator.wikimedia.org/diffusion/WFPC/browse/master/processcontrol/runner.py#L62 Is this only an issue with timeouts, where the child job kills its own subprocess? Or does it also affect the case when the child job's subprocess fails with a non-zero exit code?
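    For reference, the bail-out behavior we would expect looks roughly like this (a sketch, not the actual processcontrol code):
```lang=python
import subprocess
import sys

COMMANDS = [
    ["/usr/bin/run-job", "--job", "silverpop_emails_build_export_files"],
    ["/usr/bin/run-job", "--job", "silverpop_emails_upload_data_file"],
]

for cmd in COMMANDS:
    result = subprocess.run(cmd)
    # A child killed by a signal (e.g. on timeout) has a negative returncode;
    # treating anything non-zero as failure should stop the chain either way.
    if result.returncode != 0:
        print(f"{' '.join(cmd)} exited with {result.returncode}; aborting remaining jobs",
              file=sys.stderr)
        sys.exit(1)
```
If the parent instead only inspects an exit code that the timed-out child never propagates, that would explain why the following jobs still ran.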
    • Task
    NOTE: Prospective mentors should use the [[ https://phabricator.wikimedia.org/maniphest/task/edit/form/1/?title=Outreachy%2032%3A%20%5Badd%20project%20title%5D&description=**Project%20title%3A**%20name%20of%20the%20project%0D%0A**Brief%20summary%3A**%20description%20of%20the%20project%20(2-5%20sentences)%0D%0A**Expected%20outcomes%3A**%20the%20overall%20goal%20of%20the%20project%0D%0A**Skills%20required%2Fpreferred%3A**%20skills%2C%20specific%20technologies%2C%20Phabricator%20project%20tags%0D%0A**Mentors%3A**%20must%20have%20at%20least%202%2C%20include%20Phabricator%20username%20of%20each%20mentor%0D%0A**Rating%3A**%20easy%2C%20medium%2C%20or%20hard%0D%0A**Microtasks%3A**%20links%20to%20easy%20and%20self-contained%20tasks%20on%20Phabricator%20that%20students%20can%20work%20on%20to%20get%20familiar%20with%20the%20project%20and%20technologies%0D%0A**Any%20other%20additional%20information%20for%20contributors%3A**%20communication%20channels%2C%20etc%0D%0ANEW%20QUESTIONS%0D%0A**What%20WMF%20priority%20does%20this%20project%20align%20with%3F**%20A%20Wishlist%20item%3F%20An%20Annual%20Plan%20objective%3F%0D%0A**Why%20are%20you%20proposing%20it%3F**%20What%20needs%20are%20you%20aiming%20to%20meet%3F%20Is%20it%20for%20your%20Wiki%20chapter%2C%20your%20community%2C%20etc%3F%0D%0A**What%20is%20the%20expected%20impact%3F**%20What%20does%20success%20look%20like%3F%20How%20will%20this%20affect%20the%20needs%20you%20have%20identified%3F%0D%0A%0D%0A%3D%3D%3D%3D%3DRecommendation%0D%0AWe%20strongly%20encourage%20mentors%20to%20request%20additional%20specific%20details%20to%20help%20weed%20out%20AI-generated%20applications%20from%20potential%20contributors.%20Consider%20adding%20pre-reqs%20and%20ensure%20that%20you%20communicate%20directly%20with%20contributors%20before%20making%20your%20selection.&projects=developer-outreach%2C%20outreachy-round-31&priority=triage&parent=417438 | template ]] to create a task with your 2026 Outreachy Round 32 project information. It will become a subtask of this one. WARNING: As of 2026, we are changing the project selection to align with the Wikimedia Foundation’s goals and to ensure a positive learning experience for interns. Please read: - We will prioritize projects that align with the **WMF Annual Plan**, specifically the [[ https://meta.wikimedia.org/wiki/Community_Wishlist | Community Wishlist ]] and/or the [[ https://meta.wikimedia.org/wiki/Wikimedia_Foundation_Annual_Plan/2025-2026/Product_%26_Technology_OKRs#Product_and_Technology_Objectives | Product and Technology Objectives ]]. This ensures that we are focusing on projects with community demand and/or strategic alignment. - We want to ensure a **safe and productive** environment for interns. Projects that require extensive community consultation will not be selected. - Mentors can only support **one project**. This means that mentors cannot participate in two Outreachy projects or in both Google Summer of Code and Outreachy at the same time. This ensures that mentors have sufficient capacity for their project and interns get the time and attention they need to do well. - Mentors will first submit their projects to **Phabricator only**. After we have reviewed projects, we notify approved mentors that they are cleared to submit their projects to the Outreachy website. The deadline for this submission is March 7, 2026 at 4pm UTC. This will help make the submission process more efficient. - In order to be accepted, all projects must have at least **two mentors** at the time of submission. 
This allows us to see which projects are fully staffed. - In order to be accepted, all projects must have microtasks **on Phabricator** at the time of submission. This allows us to see which projects are fully prepared and makes it easier for prospective interns to understand how to start contributing. We acknowledge that this is different from how we approached Outreachy rounds in the past. However, we believe that these changes will improve the program, and we will continue to evaluate the results and iterate as needed. ====This task will collect suggestions via subtasks for project ideas and mentors for the [[ https://www.outreachy.org/ | WMF Outreachy Round 32 ]]. //Please do not suggest projects by commenting on this task itself.// IMPORTANT: Deadline for project proposals is **25 February 2026 at 4pm UTC.** Please review the following resources for more information: - [[ https://www.mediawiki.org/wiki/Outreachy/Participants | Information for Participants ]] - [[ https://www.mediawiki.org/wiki/Outreachy/Mentors | Information for Mentors ]] NOTE: Good projects can be: low-hanging fruit, risky/exploratory, fun or peripheral, core development, infrastructure or automation. ====The [[ https://phabricator.wikimedia.org/maniphest/task/edit/form/1/?title=Outreachy%2032%3A%20%5Badd%20project%20title%5D&description=**Project%20title%3A**%20name%20of%20the%20project%0D%0A**Brief%20summary%3A**%20description%20of%20the%20project%20(2-5%20sentences)%0D%0A**Expected%20outcomes%3A**%20the%20overall%20goal%20of%20the%20project%0D%0A**Skills%20required%2Fpreferred%3A**%20skills%2C%20specific%20technologies%2C%20Phabricator%20project%20tags%0D%0A**Mentors%3A**%20must%20have%20at%20least%202%2C%20include%20Phabricator%20username%20of%20each%20mentor%0D%0A**Rating%3A**%20easy%2C%20medium%2C%20or%20hard%0D%0A**Microtasks%3A**%20links%20to%20easy%20and%20self-contained%20tasks%20on%20Phabricator%20that%20students%20can%20work%20on%20to%20get%20familiar%20with%20the%20project%20and%20technologies%0D%0A**Any%20other%20additional%20information%20for%20contributors%3A**%20communication%20channels%2C%20etc%0D%0ANEW%20QUESTIONS%0D%0A**What%20WMF%20priority%20does%20this%20project%20align%20with%3F**%20A%20Wishlist%20item%3F%20An%20Annual%20Plan%20objective%3F%0D%0A**Why%20are%20you%20proposing%20it%3F**%20What%20needs%20are%20you%20aiming%20to%20meet%3F%20Is%20it%20for%20your%20Wiki%20chapter%2C%20your%20community%2C%20etc%3F%0D%0A**What%20is%20the%20expected%20impact%3F**%20What%20does%20success%20look%20like%3F%20How%20will%20this%20affect%20the%20needs%20you%20have%20identified%3F%0D%0A%0D%0A%3D%3D%3D%3D%3DRecommendation%0D%0AWe%20strongly%20encourage%20mentors%20to%20request%20additional%20specific%20details%20to%20help%20weed%20out%20AI-generated%20applications%20from%20potential%20contributors.%20Consider%20adding%20pre-reqs%20and%20ensure%20that%20you%20communicate%20directly%20with%20contributors%20before%20making%20your%20selection.&projects=developer-outreach%2C%20outreachy-round-31&priority=triage&parent=417438 | task template ]] will include the following information: **Project title:** name of the project **Brief summary:** description of the project (2-5 sentences) **Expected outcomes:** the overall goal of the project **Skills required/preferred:** skills, specific technologies, Phabricator project tags **Mentors:** must have at least 2, include Phabricator username of each mentor **Rating:** easy, medium, or hard **Microtasks:** links to easy and self-contained tasks on Phabricator that students can work on to get familiar with the project and technologies **Any other additional information for contributors:** communication channels, etc NEW QUESTIONS **What WMF priority does this project align with?** A Wishlist item? An Annual Plan objective? **Why are you proposing it?** What needs are you aiming to meet? Is it for your Wiki chapter, your community, etc? **What is the expected impact?** What does success look like? How will this affect the needs you have identified? =====Recommendation We strongly encourage mentors to request additional specific details to help weed out AI-generated applications from potential contributors. Consider adding pre-reqs and ensure that you communicate directly with contributors before making your selection. IMPORTANT: GSoC / Outreachy candidates are required to complete micro-tasks during the application period to prove their ability to work on a three month long project
    • Task
    Attempt to make rebuilds of catalyst-api less painful by leveraging Go's package and build cache.
    • Task
    Research Engineering has been involved in multiple parallel efforts that analyze webrequest traffic (API user segmentation, traffic pattern similarity search tooling, and bot detection). This task is to produce proposal documents that align these efforts around a shared user-type taxonomy, describe the common technical foundations, and outline the potential benefits across related OKRs. The output should clarify what can be reused, what gaps remain, and what decisions are needed to move from exploratory analysis to a sustained capability for understanding and acting on traffic signals. The proposal will build on the learnings and code from the API user segmentation work ([[ https://gitlab.wikimedia.org/kcvelaga/api_user_segmentation/-/tree/main?ref_type=heads | notebooks ]]) to define a generalized taxonomy of user types observable in webrequest traffic. In addition to extending the existing technical analysis, this will require collaboration with qualitative research to shape a taxonomy that is interpretable, stable enough to operationalize, and useful for multiple audiences (research, product, platform). The proposal will connect this taxonomy to the ongoing work to build tooling for SRE to [[ https://docs.google.com/document/d/1cgRHfeFDPRrBX37IXRF-cIQe43dq20TfReFh2msgFNQ/edit?tab=t.0#heading=h.m7gawtigq7jq | search for similar traffic patterns ]] (WE4.3.4), describing how user-type-aware approaches could narrow the scope of the technical problem by enabling different modelling strategies per user type. Finally, the doc will tie this work to [[ https://docs.google.com/document/d/1aKy4g5wPlsDYlNPUgGEVMZ6TO9GwpgqszR3lV0SBycg/edit?tab=t.wi5d5q68yfl1 | bot detection efforts ]] (SDS1.3 and SDS1.5) and the not-yet-prioritized proposal to develop a taxonomy/threat model of automated agents, motivations, and techniques, showing how a unified taxonomy can improve how we express and quantify detection capabilities.
    • Task
    After pushing out the structured logging work and checking in on it, I noticed a lot of duplicate Adyen ApplePay getpaymentmethods calls, possibly being made from the mobile apps, e.g.:
```
Feb 13 16:31:24 payments1006 SmashPig-Braintree: braintree::244863731:244863731.1 | (APITimings) [|braintree|venmo|createsession|request|time] 0.272268s | |
Feb 13 16:33:09 payments1005 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.351215s | |
Feb 13 16:34:14 payments1006 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.309871s | |
Feb 13 16:36:26 payments1005 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.310817s | |
Feb 13 16:37:06 payments1006 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.325028s | |
Feb 13 16:45:51 payments1005 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.393592s | |
Feb 13 16:45:51 payments1005 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.251065s | |
Feb 13 16:45:51 payments1005 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.308667s | |
Feb 13 16:45:51 payments1005 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.306557s | |
Feb 13 16:45:51 payments1005 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.351953s | |
Feb 13 16:45:51 payments1005 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.342356s | |
Feb 13 16:46:25 payments1005 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.305105s | |
Feb 13 16:47:20 payments1005 SmashPig-Braintree: braintree::244863798:244863798.1 | (APITimings) [|braintree|venmo|createsession|request|time] 0.251093s | |
Feb 13 16:48:04 payments1005 SmashPig-Braintree: braintree::244863798:244863798.1 | (APITimings) [|braintree|venmo|createsession|request|time] 0.393999s | |
Feb 13 16:49:01 payments1006 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.349528s | |
Feb 13 16:50:00 payments1006 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.305722s | |
Feb 13 16:50:00 payments1006 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.250033s | |
Feb 13 16:50:00 payments1006 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.307664s | |
Feb 13 16:50:00 payments1006 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.314715s | |
Feb 13 16:50:12 payments1006 SmashPig-Adyen: adyen | (APITimings) [|adyen|apple|getpaymentmethods|request|time] 0.247764s | |
```
Let's determine whether this is intentional and, if not, identify why it's happening. FYI @Tsevener @Dbrant
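    One way to quantify the duplication is to bucket these entries by second and flag bursts; a quick sketch against an extracted log file (the filename and regex are ad hoc for the sample lines above):
```lang=python
from collections import Counter
import re

# Matches e.g. "Feb 13 16:45:51 payments1005 SmashPig-Adyen: ...|getpaymentmethods|..."
PATTERN = re.compile(r"^(\w{3} +\d+ \d\d:\d\d:\d\d) .*\|getpaymentmethods\|")

counts = Counter()
with open("smashpig-adyen.log") as f:  # hypothetical syslog extract
    for line in f:
        m = PATTERN.match(line)
        if m:
            counts[m.group(1)] += 1

for ts, n in sorted(counts.items()):
    if n > 1:  # e.g. the burst of 6 calls at Feb 13 16:45:51 in the sample
        print(f"{ts}: {n} getpaymentmethods calls")
```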
    • Task
    We're trying to load a file like `https://civicrm.wikimedia.org/public/persist/contribute/dyn/angular-modules.50b2581b776f771e117ab24cd1ef8059.json`, but getting a 404. The file's hash has changed, but presumably the previously loaded page is requesting the old version. This seems to have been going on for a while without much visible impact, but recently @MDemosWMF had an issue where the modal to edit a contact in batch entry failed to load entirely, and we had 404s for the file above logged at that time. I'm able to replicate the 404s by loading the batch entry form, then clearing the caches, then editing a contact - but I'm not able to replicate the failure to load the modal.
    • Task
    I'm using Ubuntu 24.04, Chrome, and MediaWiki 1.46.0-alpha (83df57f) 08:35, 13. Feb. 2026 with MathJax rendering. The rendering of the formula <math>f'(x) = \frac{\mathrm d}{\mathrm dx} f(x) = \lim \limits_{\Delta x \to 0} \left( \frac{\Delta f(x)}{\Delta x} \right)</math> is wrong with MathJax rendering. I am not sure where exactly the problem lies. After removing f'(x) = from the formula, the error does not occur. {F72070620} This was found at https://de.wikipedia.org/w/index.php?title=Hilfe:TeX&oldid=264108844
    • Task
    Currently, running Wikibase's PHPUnit tests locally results in the following PHP warning being displayed:
```
$ composer phpunit -- extensions/Wikibase --verbose
> Composer\Config::disableProcessTimeout
Using PHP 8.5.2
Running with MediaWiki settings because there might be integration tests
PHP Warning: Invalid substructure diff for key links: Diff\DiffOp\DiffOpChange [Called from Wikibase\DataModel\Services\Diff\EntityDiff::fixSubstructureDiff in /[...]/extensions/Wikibase/lib/packages/wikibase/data-model-services/src/Diff/EntityDiff.php at line 64] in /[...]/includes/Debug/MWDebug.php on line 486
```
This is also being seen in Wikibase CI (e.g. at https://integration.wikimedia.org/ci/job/quibble-vendor-mysql-php83/38020/consoleFull#console-section-14):
```
15:55:37 > MediaWiki\Composer\PhpUnitSplitter\PhpUnitXmlManager::listTestsNotice
15:55:37
15:55:37 Running `phpunit --list-tests-xml` to get a list of expected tests ...
15:55:37
15:55:37 > Composer\Config::disableProcessTimeout
15:55:38 Using PHP 8.3.30
15:55:38 Running with MediaWiki settings because there might be integration tests
15:55:44 PHP Warning: Invalid substructure diff for key links: Diff\DiffOp\DiffOpChange [Called from Wikibase\DataModel\Services\Diff\EntityDiff::fixSubstructureDiff in /workspace/src/extensions/Wikibase/lib/packages/wikibase/data-model-services/src/Diff/EntityDiff.php at line 64] in /workspace/src/includes/Debug/MWDebug.php on line 486
```
`git bisect` locally says that this has started happening since <https://gerrit.wikimedia.org/r/c/mediawiki/extensions/Wikibase/+/1237330>; cc @umherirrender @jdforrester-wmf fyi
    • Task
    Acoustic can't URL-encode the email address for us, so any `+` in the URL is interpreted as a space (any `+` after the `?` is treated as a space). So far our only potential solution is to send the URL-encoded email up to Acoustic as a separate field, but we can't do this in SQL. We could just replace the `+` in SQL, but that seems like a pretty weak solution.
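    For reference, percent-encoding the address before it goes into the link avoids the ambiguity, since `%2B` survives query-string parsing where a literal `+` decodes to a space; a small Python illustration (the URL is hypothetical):
```lang=python
from urllib.parse import quote, parse_qs

email = "jane+test@example.org"
encoded = quote(email, safe="")  # 'jane%2Btest%40example.org'
url = f"https://example.org/unsubscribe?email={encoded}"

# A receiver parsing the query string now recovers the original address:
query = parse_qs(url.split("?", 1)[1])
print(query["email"][0])  # jane+test@example.org
```
This is effectively what the separate pre-encoded field would give us; the open problem is producing that encoding within the SQL export.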
    • Task
    This task will track the racking, setup, and OS installation of druid-internal100[1-6] == Hostname / Racking / Installation Details == **Hostnames:** `druid-internal100[1-6]` **Racking Proposal:** Distributed across eqiad rows A-F **Networking Setup:** # of Connections:**1**/2 - Speed:10G. - VLAN: **analytics** **OS Distro:** Trixie **Boot Method:** UEFI **Sub-team Technical Contact:** @BTullis or @elukey == Per host setup checklist == Each host should have its own setup checklist copied and pasted into the list below. ==== druid-internal1001 [] Receive in system on #procurement task T413446 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook ==== druid-internal1002 [] Receive in system on #procurement task T413446 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook ==== druid-internal1003 [] Receive in system on #procurement task T413446 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook ==== druid-internal1004 [] Receive in system on #procurement task T413446 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook ==== druid-internal1005 [] Receive in system on #procurement task T413446 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook ==== druid-internal1006 [] Receive in system on #procurement task T413446 & in Coupa [] Rack system with proposed racking plan (see above) & update Netbox (include all system info plus location, state of planned) [] Run the [[ https://netbox.wikimedia.org/extras/scripts/provision_server.ProvisionServerNetwork/ | Provision a server's network attributes ]] Netbox script - Note that you must run the DNS and Provision cookbook after completing this step [] **Immediately** run the `sre.dns.netbox` cookbook [] **Immediately** run the `sre.hosts.provision` cookbook [] Run the `sre.hardware.upgrade-firmware` cookbook [] Update the `operations/puppet` repo - this should include updates to preseed.yaml, and site.pp with roles defined by service group: https://wikitech.wikimedia.org/wiki/SRE/Dc-operations [] Run the `sre.hosts.reimage` cookbook
    • Task
    In T405422, we **started** the Paste Check A/B experiment. In this task, we will **stop** the Paste Check A/B experiment. === Timing !!**TBD**!!, pending T414755. === Requirements - Turn off the Paste Check experiment we enabled in T405422 === Done - [ ] Editing Engineering deploys config change to //stop// the Paste Check A/B experiment
    • Task
    During the decom of sretest1002 I ran into the following traceback:
```
Delete IP 10.64.185.3/24 on eno1
Delete IP 2620:0:861:13f:10:64:185:3/64 on eno1
Unset DNS name for IP 10.65.3.101/16 on mgmt
[Netbox] Set status to Decommissioning, deleted all non-mgmt IPs, updated switch interfaces (disabled, removed vlans, etc)
Host steps raised exception
Traceback (most recent call last):
  File "/srv/deployment/spicerack/cookbooks/sre/hosts/decommission.py", line 400, in run
    self._decommission_host(fqdn)
  File "/srv/deployment/spicerack/cookbooks/sre/hosts/decommission.py", line 338, in _decommission_host
    configure_switch_interfaces(self.remote, netbox, netbox_data, self.spicerack.verbose)
  File "/srv/deployment/spicerack/cookbooks/sre/network/__init__.py", line 51, in configure_switch_interfaces
    live_interface = get_junos_live_interface_config(remote_host, nb_switch_interface.name, print_output)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/srv/deployment/spicerack/cookbooks/sre/network/__init__.py", line 287, in get_junos_live_interface_config
    results_raw = remote_host.run_sync(f"show configuration interfaces {interface} | display json",
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/spicerack/remote.py", line 556, in run_sync
    return self._execute(
           ^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/spicerack/remote.py", line 764, in _execute
    raise RemoteExecutionError(ret, "Cumin execution failed", worker.get_results())
spicerack.remote.RemoteExecutionError: Cumin execution failed (exit_code=2)
**Host steps raised exception**: Cumin execution failed (exit_code=2)
```
@elukey tracked down the root cause:
```
<elukey> I think the issue is this
<elukey> A:lsw1-d6-eqiad# show configuration interfaces ethernet-1/6 | display json
<elukey> Parsing error: Unknown token 'configuration'. Options are ['#', '/', '>', '>>', 'acl', 'arpnd', 'interface', 'lag', 'network-instance', 'platform', 'system', 'tunnel', 'tunnel-interface', 'version', '|']
<elukey> ah yeah is that a nokia switch?
<elukey> yep.. Ok so we are trying to use junos commands on nokia, this is why it fails
```
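    A sketch of the kind of guard the cookbook could grow, assuming the Netbox device record exposes the platform (the function and attribute names here are illustrative; the real spicerack/Netbox API may differ):
```lang=python
def get_live_interface_config(remote_host, netbox_device, interface, print_output):
    """Fetch live interface config, but only issue the Junos-specific CLI on
    Junos switches; Nokia (SR Linux) devices use a different syntax entirely."""
    platform = getattr(netbox_device.platform, "slug", None)  # e.g. "junos"
    if platform != "junos":
        print(f"Skipping live interface config on {netbox_device.name}: "
              f"platform {platform!r} is not Junos")
        return None
    return remote_host.run_sync(
        f"show configuration interfaces {interface} | display json",
        print_output=print_output,
    )
```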
    • Task
    [] Make a script that gets everything set up in a simple way. [] Make it work on macOS
    • Task
    Project Name: `catalyst-dev` Type of quota increase requested: CPU, RAM, and volume storage Amount to increase: VCPU: +16, RAM: +32GB, Volume storage: +450GB Reason: In T407733, resources were increased for the `catalyst` project to allow for the creation of another worker node. Now we need matching resources in the `catalyst-dev` project so that we can bring up another worker node there too.
    • Task
    e.g. https://www.wikidata.org/wiki/Special:Log?type=liquidthreads side effect of {T89426} / rEWME1e6e9425a9743920a6fe44f529cddcd23ce275d9.
    • Task
    Implement Snackbar component. https://m3.material.io/components/snackbar/overview Reference implementation: https://www.mdui.org/en/docs/2/components/snackbar
    • Task
    Use a custom Tooltip component instead of Codex's tooltips for greater UI consistency. https://m3.material.io/components/tooltips/overview
    • Task
    #growthexperiments-mentorship needs to maintain a list of mentors. Currently, this is done via CommunityConfiguration, specifically using the `GrowthMentorList` provider. Data validity is ensured by a custom-written validator (originally from the time before Community Configuration existed, when a similar feature lived internally within GrowthExperiments). Ideally, we should use the default JSON-based validator. This would allow us to make several improvements: * remove the last bits of CC1.0 code from GrowthExperiments, * ensure the defaults-provisioning code works as intended (see T417417 for more details), * in the long term, have something easier to maintain than a custom-written validator. @Urbanecm_WMF [previously attempted](https://gerrit.wikimedia.org/r/c/mediawiki/extensions/GrowthExperiments/+/1052143) to do this as part of {T367575}, but unsuccessfully.
    • Task
    I'm using Ubuntu 24.04, Chrome, and MediaWiki 1.46.0-alpha (83df57f) 08:35, 13. Feb. 2026 with MathJax rendering. The code <math>\not=</math>, <math>\not<</math> or <math>\not></math> is not rendered correctly with MathJax and native MathML rendering. {F72062841} This bug is connected to T402082.
    • Task
    (the wikitext below is essentially the exact wikitext I used on both test wikis, minus the page names; I performed both tests using a root page of `Template:Inclusion control test` so that is now unusable) **Steps to replicate the issue** (include links if applicable): * Create a test template with the content ``` {{big|{{strong|Copyright attribution: all descriptions of inclusion control tags are based on [[w:Special:Permalink/1337167814]].}}}} This is a test of whether other inclusion control tags function '''inside''' <code><nowiki><onlyinclude></onlyinclude></nowiki></code> tags. {{cot|Long-winded explanation}} {{slink|w:Help:Template#Inclusion control: noinclude, includeonly, and onlyinclude}} says that the <code><nowiki><onlyinclude></onlyinclude></nowiki></code> tags create a situation where "nothing on the page except what appears between the tags is included when the template is called". See the reference beside this sentence for a description of the other inclusion control tags.<ref><code><nowiki><noinclude></noinclude></nowiki></code> tags cause the wikitext between them to be processed when the source page is viewed or saved, but not upon transclusion/substitution of the source page. <code><nowiki><includeonly></includeonly></nowiki></code> tags, on the other hand, cause the wikitext to be processed only upon transclusion/substitution and not when the source page is viewed or saved.</ref> As evidence, <code><nowiki><includeonly>includeonly text</includeonly><onlyinclude>onlyinclude text</onlyinclude><noinclude>noinclude text</noinclude></nowiki></code> is provided. Notice that "onlyinclude text" and "noinclude text" appear on the source page; however, only "onlyinclude text" appears during transclusion/substitution. "includeonly text" never appears throughout the process as it is overridden by the <code><nowiki><onlyinclude></onlyinclude></nowiki></code> tags. {{cob}} The idea is that <code><nowiki><noinclude></noinclude></nowiki></code> and <code><nowiki><includeonly></includeonly></nowiki></code> might work ''within'' <code><nowiki><onlyinclude></onlyinclude></nowiki></code> tags. To test this, I will use the wikitext <code><nowiki><onlyinclude>This is<noinclude> the source page of</noinclude> a [[(**full page name of your choice**)|test template]]<includeonly> {{safesubst:<noinclude />ifsubst|that has been substituted|being transcluded}}</includeonly>.</onlyinclude></nowiki></code>; below is the wikitext in action. <onlyinclude>This is<noinclude> the source page of</noinclude> a [[(full page name)|test template]]<includeonly> {{safesubst:<noinclude />ifsubst|that has been substituted|being transcluded}}</includeonly>.</onlyinclude> I will [[w:Help:Transclusion|transclude]] and [[w:Help:Substitution|substitute]] this template on the subpage [[/test subpage]] to see whether this trick functions. == Notes == {{reflist}} ``` * Click on the `[[/test subpage]]` redlink and create it with the following content: ``` See the [[(full name of root page)|root page]] for why this subpage exists. A reminder: the wikitext processed here is <code><nowiki><onlyinclude>This is<noinclude> the source page of</noinclude> a [[(full name of root page)|test template]]<includeonly> {{safesubst:<noinclude />ifsubst|that has been substituted|being transcluded}}</includeonly>.</onlyinclude></nowiki></code>. Transclusion: {{(name of root page)}} Substitution: {{subst:(name of root page)}} ``` **What happens?**: On the subpage, the transclusion shows "onlyinclude textThis is a test template being transcluded.This is a test template being transcluded." when read. The substitution shows "onlyinclude textThis is a test template that has been substituted.This is a test template that has been substituted." **What should have happened instead?**: The transclusion should have shown "This is a test template being transcluded." The substitution should have shown "This is a test template that has been substituted." **Software version** (on `Special:Version` page; skip for WMF-hosted wikis like Wikipedia): **Other information** (browser name/version, screenshots, etc.): Browser: Google Chrome, Version 144.0.7559.133 (Official Build) (arm64)
    • Task
    If the `MediaWiki:GrowthMentors.json` page does not exist, GrowthExperiments logs the following:
```
Time level channel host wiki message
Feb 13, 2026 @ 12:46:24.634 ERROR GrowthExperiments mw-web.eqiad.main-5c9fc7f974-ftfsw tlywiki Key <code>Mentors</code> is missing
Feb 13, 2026 @ 12:17:28.423 ERROR GrowthExperiments growthexperiments-updatementeedata-s3-29516415-v6fgg nawiki Key <code>Mentors</code> is missing
Feb 13, 2026 @ 08:39:06.582 ERROR GrowthExperiments mw-web.eqiad.main-5c9fc7f974-lbcwr kuswiki Key <code>Mentors</code> is missing
Feb 13, 2026 @ 06:43:43.568 ERROR GrowthExperiments mw-web.eqiad.main-5c9fc7f974-rrqg8 bbcwiki Key <code>Mentors</code> is missing
Feb 13, 2026 @ 05:54:27.149 ERROR GrowthExperiments mw-web.eqiad.main-5c9fc7f974-w2x74 guwwiki Key <code>Mentors</code> is missing
Feb 13, 2026 @ 05:17:17.480 ERROR GrowthExperiments mw-web.codfw.main-74cbb9fbd5-xxwcl pcmwiki Key <code>Mentors</code> is missing
```
Those errors are relatively insignificant, because the wiki will behave just as if it had no mentors, which is expected. However, given the message is logged at the `ERROR` level, it can be confusing to log readers. Fundamentally, this is happening because the mentor list is (still) not validated using native CC2.0 validators (which can provision default values if the page does not exist). In the meantime, we can solve this by declaring a custom CommunityConfiguration provider, which would provide the default value if the page is missing. This task should be done together with {T304052}, because having more wikis with GrowthExperiments means those logs will be much more frequent than before.
    • Task
Hi,

This was briefly discussed in Slack. We have a [traffic replay harness](https://gitlab.wikimedia.org/repos/wikidata-platform/queryhammer/-/merge_requests/3), currently tested on stats hosts (under my account), that we would like to deploy on dse-k8s. This would allow us to replay query logs from Kafka to test eqiad nodes. We need this to measure performance and track query failures before we expose the endpoints to actual traffic. Could you help with that?

We would like two deployments, for the main and scholarly endpoints respectively. Resource-wise, we can start with the smallest size available. There is still work for us to do (pending MR, providing a docker image) before we are ready to deploy, but I wanted to start the conversation here on phab. Happy to discuss further.
    • Task
In order to run WDQS traffic replay on k8s, we need the [[ https://gitlab.wikimedia.org/repos/wikidata-platform/queryhammer | project ]] to implement CI to build and deploy a docker image.

Docs
- https://wikitech.wikimedia.org/wiki/PipelineLib/Guides/How_to_define_a_golang_test_pipeline
- https://www.mediawiki.org/wiki/GitLab/Workflows/Deploying_services_to_production

AC
- [] A blubber pipeline for building a golang (latest) docker image
- [] Test run with the data race detector enabled (`go test -race ./...`)
- [] A gitlab CI pipeline to run test automation and publish a docker image to Wikimedia's registry.
    • Task
In order to add datatypes incrementally, we added some logic to deal with unsupported datatypes. Once all datatypes are supported, this can be removed.

Blocked on the completion of these tickets:
- {T407248}
- {T414416}
- {T409453}
- {T409454}
- {T405730}
- {T405731}
- {T417041}
- {T417042}
- {T417043}
- {T412128}

=== Acceptance Criteria
- [ ] remove `WBUI2025_UNSUPPORTED_DATATYPES` from `view/src/Wbui2025FeatureFlag.php`
- [ ] remove `getSupportedDataTypes` function and all its uses
- [ ] remove mocks for `resources/wikibase.wbui2025/supportedDatatypes.json` in all the tests that use this
- [ ] clean up vue components that use `supportedDatatypes` (`propertyLookup`, `statementGroupView`, and possibly others)
- [ ] clean up `view/src/VueNoScriptRendering.php`
- [ ] look for and clean up anything related to unsupported datatypes that is not captured by this list
    • Task
    This is a draft task for now. See a very similar task for Wikibase: {T287582}.
    • Task
{F72057479 size=full}

There are different amounts of spacing between Password and Passkeys (and similarly between Two-factor and Recovery) vs. between Passkeys and Two-factor.
    • Task
    The English Wikipedia Arbitration Committee's email address is arbcom-en@wikimedia.org. However, it's likely some people attempting to contact us are emailing arbcom-en@wikipedia.org instead. Automatically forwarding these emails would be very useful, and it probably makes sense to do so for the whole domain to help confused users attempting to contact other Wikimedians.
    • Task
Quibble takes four seconds to output version numbers...
```
quibble.commands:>>> Start: Versions
quibble.commands:Python version: 3.9.2 (default, Jan 25 2026, 13:37:52) [GCC 10.2.1 20210110]
quibble.commands:chromedriver --version: ChromeDriver 120.0.6099.224 (3587067cafd6f5b1e567380acb485d96e623ef39-refs/branch-heads/6099@{#1761})
quibble.commands:chromium --version: Chromium 120.0.6099.224 built on Debian 11.8, running on Debian 11.11
quibble.commands:composer --version: Composer version 2.9.1 2025-11-13 16:10:38
quibble.commands:PHP version 8.3.30 (/usr/bin/php8.3)
quibble.commands:Run the "diagnose" command to get more detailed diagnostics output.
quibble.commands:mysql --version: mysql Ver 15.1 Distrib 10.5.29-MariaDB, for debian-linux-gnu (x86_64) using EditLine wrapper
quibble.commands:psql --version: psql (PostgreSQL) 13.23 (Debian 13.23-0+deb11u1)
quibble.commands:node --version: v20.19.5
quibble.commands:npm --version: 10.8.2
quibble.commands:php --version: PHP 8.3.30 (cli) (built: Jan 20 2026 19:35:57) (NTS)
quibble.commands:Copyright (c) The PHP Group
quibble.commands:Zend Engine v4.3.30, Copyright (c) Zend Technologies
quibble.commands:    with Zend OPcache v8.3.30, Copyright (c), by Zend Technologies
quibble.commands:<<< Finish: **Versions, in 4.019 s**
```
This is because we have 8 commands to check, and they are processed serially by `ReportVersions`...
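Since the version commands are independent of one another, they could run concurrently. A minimal sketch of the idea (JavaScript here purely for illustration — Quibble itself is Python, and the command list below is a made-up subset):
```
// Illustration only: run the independent version commands concurrently and
// print the results once they have all settled. Total wall time becomes
// roughly the slowest command instead of the sum of all of them.
const { exec } = require( 'child_process' );
const { promisify } = require( 'util' );
const run = promisify( exec );

const commands = [ 'php --version', 'composer --version', 'node --version', 'npm --version' ];

Promise.all( commands.map( ( cmd ) =>
	run( cmd ).then(
		( { stdout } ) => `${ cmd }: ${ stdout.split( '\n' )[ 0 ] }`,
		( err ) => `${ cmd }: failed (${ err.message })`
	)
) ).then( ( lines ) => lines.forEach( ( line ) => console.log( line ) ) );
```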
    • Task
====Background
Currently, if the user goes to the Special:ReadingLists page link, they get the following error, which is not user friendly - it does not clearly indicate what went wrong.

{F72055188}

The error can occur in the following two cases:
- The user is not logged into their account
- The user opened a link to someone else's reading list page

====Requirement
Change the above error message to the following, to cover both of the cases:

{F72055385}
    • Task
We've had the ability to run kafka mirrormaker v1 on Kubernetes since https://phabricator.wikimedia.org/T304373. We're currently running in a state in which one MM1 instance runs on Kubernetes
```
brouberol@deploy2002:~$ kube-env kafka-mirrormaker dse-k8s-eqiad
brouberol@deploy2002:~$ k get pod
NAME                                                              READY   STATUS    RESTARTS   AGE
kafka-mirrormaker-logging-eqiad-to-jumbo-eqiad-7f75b974c6-rm2bw   1/1     Running   0          24h
```
and the other instances run alongside the brokers, on the various kafka clusters.

We should stop running these MM1 instances directly on the broker hosts themselves, as it will make the kafka upgrade plan easier. We should:
- agree on which k8s cluster we'd like to run all MM1 instances on
- migrate all remaining MM1 instances to this k8s cluster
    • Task
# The problem

The [[ https://integration.wikimedia.org/ci/job/quibble-with-gated-extensions-selenium-php83/ | quibble-with-gated-extensions-selenium-php83 ]] Jenkins job is by far the slowest job in the mediawiki/core test pipeline.

# Acceptance criteria

[] `quibble-with-gated-extensions-selenium-php83` is no longer the slowest job in the mediawiki/core test pipeline

# The solution

Improvements have to be made in three places:
- core
- gated extensions/skins
- CI

# Data

This is not a new problem. Unfortunately, it seems to be (slowly) becoming worse ([[ https://releng-data.wmcloud.org/jobs?sql=WITH+ordered_times+AS+%28%0D%0A++SELECT%0D%0A++++job_id%2C%0D%0A++++strftime%28%27%25Y-%25m%27%2C+timestamp+%2F+1000%2C+%27unixepoch%27%29+AS+month%2C%0D%0A++++time%2C%0D%0A++++ROW_NUMBER%28%29+OVER+%28%0D%0A++++++PARTITION+BY+job_id%2C%0D%0A++++++strftime%28%27%25Y-%25m%27%2C+timestamp+%2F+1000%2C+%27unixepoch%27%29%0D%0A++++++ORDER+BY%0D%0A++++++++time%0D%0A++++%29+AS+rn%2C%0D%0A++++COUNT%28*%29+OVER+%28%0D%0A++++++PARTITION+BY+job_id%2C%0D%0A++++++strftime%28%27%25Y-%25m%27%2C+timestamp+%2F+1000%2C+%27unixepoch%27%29%0D%0A++++%29+AS+cnt%0D%0A++FROM%0D%0A++++builds%0D%0A++WHERE%0D%0A++++job_id+IN+%28221%2C+751%29%0D%0A%29%0D%0ASELECT%0D%0A++job_id%2C%0D%0A++month%2C%0D%0A++COUNT%28*%29+AS+jobs_per_month%2C%0D%0A++ROUND%28SUM%28time%29+%2F+1000.0+%2F+3600.0%2C+2%29+AS+total_time_hours%2C%0D%0A++ROUND%28%0D%0A++++AVG%28%0D%0A++++++CASE%0D%0A++++++++WHEN+cnt+%25+2+%3D+1%0D%0A++++++++AND+rn+%3D+%28cnt+%2B+1%29+%2F+2+THEN+time%0D%0A++++++++WHEN+cnt+%25+2+%3D+0%0D%0A++++++++AND+%28%0D%0A++++++++++rn+%3D+cnt+%2F+2%0D%0A++++++++++OR+rn+%3D+cnt+%2F+2+%2B+1%0D%0A++++++++%29+THEN+time%0D%0A++++++END%0D%0A++++%29+%2F+1000.0+%2F+60.0%2C%0D%0A++++1%0D%0A++%29+AS+median_time_minutes%0D%0AFROM%0D%0A++ordered_times%0D%0AGROUP+BY%0D%0A++job_id%2C%0D%0A++month%0D%0AORDER+BY%0D%0A++job_id%2C%0D%0A++month%3B#g.mark=bar&g.x_column=month&g.x_type=ordinal&g.y_column=median_time_minutes&g.y_type=quantitative | source ]]).

| job_id | month | jobs_per_month | total_time_hours | median_time_minutes |
| ------ | ------- | -------------- | ---------------- | ------------------- |
| 221 | 2025-10 | 3553 | 1020.22 | 20.1 |
| 221 | 2025-11 | 3537 | 1069.64 | 21.9 |
| 751 | 2025-12 | 1574 | 434.89 | 20.9 |
| 751 | 2026-01 | 1254 | 380.13 | 22.0 |
| 751 | 2026-02 | 1700 | 549.86 | 23.3 |

{F72056113}

Slowest jobs in mediawiki/core test pipeline, sorted by median:

| Job | Median (mm:ss) |
| --- | ------ |
| [quibble-with-gated-extensions-selenium-php83](https://releng-data.wmcloud.org/-/dashboards/jenkins/slow-jobs?repo_name=mediawiki%2Fcore&date_end=&host_name=&job_name=quibble-with-gated-extensions-selenium-php83&date_start=2025-10-02) | 21:28 |
| [quibble-for-mediawiki-core-vendor-mysql-php83](https://releng-data.wmcloud.org/-/dashboards/jenkins/slow-jobs?repo_name=mediawiki%2Fcore&date_end=&host_name=&job_name=quibble-for-mediawiki-core-vendor-mysql-php83&date_start=2025-10-02) | 11:31 |
| [quibble-with-gated-extensions-vendor-mysql-php83](https://releng-data.wmcloud.org/-/dashboards/jenkins/slow-jobs?repo_name=mediawiki%2Fcore&date_end=&host_name=&job_name=quibble-with-gated-extensions-vendor-mysql-php83&date_start=2025-10-02) | 10:54 |

# Related tasks

## selenium/webdriverio
- In {T408361} (and subtasks) we are working on speeding up selenium/webdriverio tests.
- In {T415574} (and subtasks) we are working on being able to run selenium/webdriverio and cypress tests locally, so we can debug them and speed them up.

## CI
- In {T417416} (and subtasks) we are working on speeding up CI.
- {T287582}
- {T417412}

## gated extensions
- {T417421}

## cypress
- {T417418}

# Timeline

## 2019
- {T225730}
- {T226869}

## 2024
- {T381895}

## 2026
- {T415553} - Data on how long each step of `quibble-with-gated-extensions-selenium-php83` takes. Two slow steps are the GrowthExperiments and Wikibase tests.
    • Task
====Background
This ticket is to build a quick survey in order to understand user satisfaction with the reading list feature.

====Requirements
To be finalized:
- Criteria: A user who has saved at least (X) articles to their list (should we include people who saved and then removed an article?)
- Survey launch: The user visits their saved articles page and a quick survey with 1 question appears -- "Are you enjoying this feature?"
- Dismissal:
  - If a user answers the survey, we log their answer, and they never see this question again.
  - If the user ignores the survey and navigates elsewhere, or if they close the question without answering, we will prompt them a total of X times (in future visits), and then never show it again.

====Designs
TBD based on T417403
    • Task
In the current implementation of ReadingLists on web, we query the database to determine:
- if a page is in the user's reading list, so that the "save page" button has the "save" or "unsave" icon.
- if it is in the reading list, then get the reading list entry id to provide to the JS
- also get the reading list size for metrics.

It is problematic for scaling reading lists to query the reading_list database tables on x1 on every page view, and it is also unnecessary. We need to determine if we still need the reading list size metric here, and if so, consider other approaches. For the reading list entry id, there probably is a way to use the page id for this instead. For determining if the page is in the reading list, Amir has a suggestion:

> Build a bloom filter of existing reading list page ids for each user and put it in user_properties backed by some cache. Bloom filter will take away 99% of the load and even if it incorrectly says "this article is in the user's reading list", then you can query x1 to actually be sure, but again it won't cause any load issues. You can also put that behind memcached to make everything faster and avoid the local db query too. This is the idea that I wanted to implement for many years to remove the query of the watchlist table on every logged-in page view. If you can implement it for watchlist too, to improve performance (since it'll be backed by memcached), it would be even better!

A less efficient approach could be just to have a list of page ids that are on the user's reading list, put it in memcached, and check against that, vs. a database query.
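A minimal sketch of the bloom-filter idea (illustrative JavaScript, not MediaWiki code; the sizing and hash choices are arbitrary). A few kilobytes per user can answer "definitely not in the list" without any database access, so only probable hits need a confirming x1 query:
```
// Bloom filter over page ids: no false negatives, rare false positives.
class BloomFilter {
	constructor( sizeBits = 8192, hashes = 4 ) {
		this.sizeBits = sizeBits;
		this.hashes = hashes;
		this.bits = new Uint8Array( sizeBits / 8 );
	}

	// Double hashing over two cheap 32-bit mixes of the page id.
	positions( pageId ) {
		const h1 = ( Math.imul( pageId, 0x9E3779B1 ) >>> 0 ) % this.sizeBits;
		const h2 = ( ( Math.imul( pageId, 0x85EBCA77 ) >>> 0 ) % this.sizeBits ) || 1;
		const out = [];
		for ( let i = 0; i < this.hashes; i++ ) {
			out.push( ( h1 + i * h2 ) % this.sizeBits );
		}
		return out;
	}

	add( pageId ) {
		for ( const p of this.positions( pageId ) ) {
			this.bits[ p >> 3 ] |= 1 << ( p & 7 );
		}
	}

	mightContain( pageId ) {
		// false => definitely not saved, skip the query entirely;
		// true  => probably saved, confirm against x1 (or memcached).
		return this.positions( pageId ).every( ( p ) => ( this.bits[ p >> 3 ] & ( 1 << ( p & 7 ) ) ) !== 0 );
	}
}

const saved = new BloomFilter();
saved.add( 12345 );
console.log( saved.mightContain( 12345 ) ); // true: confirm with a real query
console.log( saved.mightContain( 54321 ) ); // almost certainly false: no query
```
The filter would be rebuilt whenever the list changes and stored serialized (user_properties and/or memcached), as suggested above.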
    • Task
====Background
During the Reading List beta phase we would like to get feedback from Readers on the feature. We are planning to use Quick Survey to get quick feedback from the readers, with questions as shown in the mocks below. This ticket is to explore the extent to which we can modify the quick survey look and feel to make it feel more embedded on the page.

====Design
Desktop
| {F72052645} | {F72052615} |

Mobile
| {F72052709} | {F72052728} | {F72052735} |

=====[[ https://www.figma.com/design/Q3pYKI7RRMdRRgkQp2RiTf/Web--Reading-List-Collections?node-id=2271-108651&t=hSGhLGv8UyrQ3ZE3-4 | Link to figma ]]

Questions:
- Are we able to modify quick survey to what's shown above?
- Do quick surveys show up on mobile?
- Can we make sure not to show the survey if the user has taken the survey on another platform, e.g. if taken on Desktop, don't show on Mobile?

====Fall back design
- We can fall back to the standard quick survey components on the right side of the page on Desktop, with the same questions.
- How does the standard quick survey show up on mobile?
    • Task
Currently all code related to the WDQS resides in one big git repository (https://gerrit.wikimedia.org/r/q/project:wikidata/query/rdf). As part of the migration to a new triple store backend several components in this repo need to be rewritten, ported or even dropped. Going forward this repository is supposed to be split into several new repositories with the old one being archived. This split will most likely not happen in one go but in several steps, each migrating one component to a new repository. This task is intended to keep track of this process.

AC:
[ ] A decision has been taken regarding the home of new repositories (GitHub vs. Gerrit)
[ ] All components (which are not deprecated after the backend transition) have been migrated to a new repo
[ ] The original repo has been archived
    • Task
**Goal**: One place for “language” (preferences, code↔zid, fallback, and ensuring/fetching). Library stays the generic cache for ZObjects and other auxiliary data.

**Move into languages.js:**
- State: languages: {}, languageCodePromises: {}
- Getter: getLanguageZidOfCode (read state.languages and wgWikiLambdaLangs; pure)
- Actions: setLanguageCode, fetchLanguageCode, ensureFallbackLanguageZids, ensureLanguageCodes

**Keep in library.js:**
- Getter: getLanguageIsoCodeOfZLang (reads this.getStoredObject(zid) / ZObject storage)

Follow up of: https://phabricator.wikimedia.org/T411703
    • Task
Some discussion about loading all the language objects on the server, mapping [code] → [zid]. Discussion is ongoing about whether we should keep it at all, given that we can’t use this in abstract mode.

The server-side languages map is currently responsible for:
1. **ZMultilingualStringDialog**: use the server-side language mapping to get language labels. This is needed to comply with Wikidata language lists where a literal language is used instead of a ZID. Now we can render a list with human-readable language labels instead of ISO codes.
2. Making sure the **fallbackLanguage** chain is resolved to ZIDs before loading the app.

Geno mentioned we might not need #1 for Abstract, but we do need #2 for Abstract because I am using the fallback languages to determine which language to load in the second preview block. Also the cdx-lookup does not like it if it gets initialized with English and then a render later it becomes the ZID of Dutch. This causes bugs in the lookup where the suggestions don’t show when clearing the input. This can be hacked around using a :key prop with the language.

Server-side code from ZobjectContentHandler.php:
```
// Add language mapping for multilingual string dialog
$parserOutput->setJsConfigVar(
	'wgWikiLambdaLangs',
	$this->zObjectStore->fetchAllZLanguageObjects()
);
```

**Implementation details:**
- getLanguageZidOfCode (library.js) only checks state.languages and wgWikiLambdaLangs. Codes not in either return undefined.
- getFallbackLanguageZids (languages.js) maps mw.language.getFallbackLanguageChain() through getLanguageZidOfCode and filters to truthy. So codes that were never fetched yield missing ZIDs (empty or partial list).
- Consumers that need those ZIDs:
  - getDefaultPreviewLanguageZids in abstractWiki.js — used in initializeAbstractWikiContent to set previewLanguageZids; needs fallback ZIDs to be available at init time. (new code WIP)
  - ZMultilingualString.vue — allViewItems (line ~160), initializeMultilingualStringList (getFallbackLanguageZids), and watch(langs) (lines ~424–425) all rely on getLanguageZidOfCode; ZIDs must eventually be available so the UI shows ZIDs and priority works. (now handled by server-side language map)

**Alternative solution implemented currently**
In https://phabricator.wikimedia.org/T411703 this issue is addressed by adding `store.ensureLanguageCodes( { codes } );`. It fetches the ZIDs for the language codes and then runs `fetchZids` to fetch the objects of those language ZIDs to ensure label data. This seems to work well for Abstract, and for the ZMultilingualString we can do this as well; therefore the server-side map no longer seems necessary. Should we remove it and make @Jdforrester-WMF happy?
    • Task
Quibble wraps commands with a chronometer which outputs the time it took for the command upon completion:
```
INFO:quibble.commands:>>> Start: npm install in /workspace/src
<<< Finish: npm install in /workspace/src, in 20.659 s
```
Those sections are parsed by the Jenkins collapsible section to show a sidebar:

{F72047986 size=full}

It would be nice to have all those durations collected and reported at the end of the execution. I have built a proof of concept emitting:
```
[ REPORT FOR COMMAND DURATIONS ]
╒══════════╤═════════════════════════════════════════════════════════════════╕
│   0.000s │ Save success cache                                               │
│   0.573s │ Versions                                                         │
│   0.001s │ Run phpbench                                                     │
│  32.388s │ Zuul clone {"branch": "master", "cache_dir": "/srv/git", "projects": ["mediawiki/core", "mediawiki/skins/Vector", "mediawiki/vendor"], "workers": 4, "workspace": "/workspace/src", "zuul_branch": "master", "zuul_project": "mediawiki/core", "zuul_ref": "refs/zuul/master/Z9ddd61a6894b451ea4e6058c9fdda679", "zuul_url": "git://contint1002.wikimedia.org"} │
│   0.086s │ Check success cache                                              │
│  10.412s │ Install composer dev-requires for vendor.git                     │
│   2.166s │ Start backends: <MySQL (no socket)>                              │
│  16.243s │ Run Post-dependency install, pre-database dependent steps in parallel (concurrency=2): * Install MediaWiki, db=<MySQL /workspace/db/quibble-mysql-ifwx2ix0/socket> * npm install in /workspace/src │
│  15.171s │ PHPUnit unit tests                                               │
│  68.271s │ PHPUnit default suite (without database or standalone)           │
│  98.161s │ Run 'composer test' and 'npm test' in parallel (concurrency=2): * composer test for mediawiki/core * npm test in /workspace/src │
│   0.010s │ Start backends: <ExternalWebserver http://127.0.0.1:9413 /workspace/src> <Xvfb :94> <ChromeWebDriver :94> │
│  14.920s │ Run QUnit tests                                                  │
│ 157.602s │ Browser tests: mediawiki/core, mediawiki/skins/Vector, mediawiki/vendor │
│  29.741s │ Run API-Testing                                                  │
│ 408.583s │ PHPUnit default suite (with database)                            │
╘══════════╧═════════════════════════════════════════════════════════════════╛
```
Thoughts?
    • Task
`wdqs1028` filesystem is corrupted. `dmesg -T` reports a number of I/O errors:
```
[Fri Feb 13 12:33:35 2026] Aborting journal on device md2-8.
[Fri Feb 13 12:33:35 2026] Buffer I/O error on dev md2, logical block 448823296, lost sync page write
[Fri Feb 13 12:33:35 2026] JBD2: I/O error when updating journal superblock for md2-8.
[Fri Feb 13 12:33:46 2026] EXT4-fs error (device md2): ext4_journal_check_start:84: comm mkdir: Detected aborted journal
[Fri Feb 13 12:33:46 2026] Buffer I/O error on dev md2, logical block 0, lost sync page write
[Fri Feb 13 12:33:46 2026] EXT4-fs (md2): I/O error while writing superblock
[Fri Feb 13 12:33:46 2026] EXT4-fs (md2): Remounting filesystem read-only
```
@BTullis did some initial troubleshooting:
```
Ben Tullis  [1:57 PM]
We are using software RAID on this server, but it's not happy.
btullis@wdqs1028:~$ cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda3[1]
      78058496 blocks super 1.2 [2/1] [_U]

md1 : active raid1 sda4[1]
      999424 blocks super 1.2 [2/1] [_U]

md2 : broken raid0 sdb5[0] sda5[1]
      3591729152 blocks super 1.2 512k chunks

unused devices: <none>
Ben Tullis  [1:58 PM]
See the [_U] bit on md0 and md1 that means that one disk has dropped out of the array, but it's still active. But md2 says it's broken.
Ben Tullis  [2:00 PM]
This is a little further up the dmesg output.
image.png
Ben Tullis  [2:00 PM]
So a SATA link was reset, then /dev/sdb got disconnected. Then a new /dev/sdc was detected. Presumably, the same disk.
```
This is a dev host, so we assume data loss. But it would be good to know whether there is a chance (and timeline) for the RAID config to be repaired.
    • Task
    Some indexes are down right now: {F72048160} This happened about once a day for at least the last week or so (e.g. {T417147}). There were also multiple full outages recently (not all have a task though as usually it recovers after about an hour): {T416614} {T416488}
    • Task
    As discussed in T394476#11610555 it seems a good idea to upgrade apus' ceph to 18.2.7 (or 18.2.8 if available), to remove any known bug that has been fixed.
    • Task
    == GitLab account activation == **To activate your account, please sign into GitLab and then answer the following question:** * Developer account / GitLab username: Wyslijp16 Toolforge == Activation checklist == [] User has provided the following: developer account username [] User has an existing developer account, and has used it to log in to GitLab If any of the following criteria are met, user should be approved immediately: [] User has a history of contributions on-wiki, on Gerrit, Phabricator, etc. [] User is known to the admin [] User is vouched for by a known contributor [] User is a member of a movement organization
    • Task
We want to understand and observe the impact that a network switch going down has on cloud, under controlled conditions. The results will give us a better idea of how to proceed with {T414835} and show how far we've come in addressing {T375204}. The main driving force behind these tests is ceph failure scenarios and resiliency, though considering cloud as a whole is worthwhile.

There is of course a spectrum of possibilities for the tests: from simply rebooting the switch and observing the effects, to shutting down progressively more ports, to maybe something else I'm forgetting now (?)

I have reviewed the rack allocation (P88809) and I think a good candidate to start with is C8: there are no cloudvirts and relatively few ceph TB compared to the rest (150), so in theory the impact should be zero/minimal.

Questions I have in mind:
1. To what extent does shutting down individual ports differ from the switch rebooting, in terms of what other hosts on the network experience? What I'm getting at here is whether we can realistically and progressively simulate a switch rebooting without doing it all at once.
1. For non-ceph hosts in C8 (namely control, gw, lb, net, rabbit, services), is automatic failover and/or minimal impact expected on switch reboot?

For 1. I'm cc'ing @ayounsi and @cmooney to help answer, whereas for 2. maybe @taavi @Andrew you have ideas/insights?
    • Task
**Steps to replicate the issue** (include links if applicable):
1. Install a custom skin that does **not** include a `mw-content-subtitle` element.
2. Open any page in VisualEditor.
3. Edit the page and save changes.

**What happens?**
After saving, VisualEditor “hangs” and the edit dialog is not properly cleaned up. The browser console does not show any errors.

**What should have happened instead?**
VisualEditor should successfully close the editing dialog and update the page, even if the custom skin does not include a subtitle element. The JavaScript logic shouldn't stop working.

**Software version:** MediaWiki 1.43, latest VisualEditor for this version, custom skin applied

**Other information:**
I'm sure the issue is inside
```
VisualEditor/modules/ve-mw/init/targets/ve.init.mw.ArticleTarget.js
```
Specifically, the following code inside the function `replacePageContent`:
```
mw.util.clearSubtitle();
mw.util.addSubtitle(contentSub);
```
The util function throws an Error when `mw-content-subtitle` does not exist in the DOM, which stops further cleanup of the VE dialog.

Notes: I could not find anywhere in the MediaWiki or VisualEditor documentation that `mw-content-subtitle` is a required element for custom skins.

**Suggested fix:**
- In ve.init.mw.ArticleTarget.js, wrap the addSubtitle call in a try/catch, or check for the existence of the element before attempting to update the subtitle (see the sketch below).
- This might also be considered for MediaWiki core, if mw.util.addSubtitle is expected to work without crashing the JavaScript.
- Clarify in the documentation that custom skins must include mw-content-subtitle.
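A sketch of the guard described above (assuming the element id used by mw.util is `mw-content-subtitle`, per this report, and the surrounding scope of `replacePageContent`):
```
// Sketch only: guard the subtitle update so a skin without the
// mw-content-subtitle element does not abort the rest of the save cleanup.
if ( document.getElementById( 'mw-content-subtitle' ) ) {
	mw.util.clearSubtitle();
	mw.util.addSubtitle( contentSub );
} else {
	mw.log.warn( 'Skin has no mw-content-subtitle element; skipping subtitle update.' );
}
```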
    • Task
build2002 has been around for a while, but various image builds and reporting timers still run from build2001 and need to be migrated before build2001 can be decommissioned.
    • Task
    We track statistics about Automoderator's behaviour on the public Superset instance: https://superset.wmcloud.org/superset/dashboard/unified-automoderator-activity-dashboard/ This instance is scheduled for shutdown at the end of March: {T416373} We need to evaluate whether we have other options for providing this data. It may also be worth spending some time researching the value this dashboard provides - who is using it? Which metrics do they find particularly valuable?
    • Task
    A/C: there is an ADR documenting the (temporary) move to the Action API
    • Task
ZMultilingualStringDialog fetches the Suggested languages, but they are already fetched in the prefetchData in APP.vue, so it seems a bit redundant to do that again. Check if we can remove this:
```
// Data fetching

/**
 * Checks if there are no visible local items and fetches common language ZIDs if needed.
 * This helper method centralizes the logic for determining when to fetch common languages.
 */
function fetchCommonLanguagesIfNeeded() {
	if ( getAvailableLanguages.value.length === 0 ) {
		store.fetchZids( { zids: Constants.SUGGESTIONS.LANGUAGES } );
	}
}

// Watch
watch( () => props.items, () => {
	fetchCommonLanguagesIfNeeded();
} );

// Lifecycle
onMounted( () => {
	fetchCommonLanguagesIfNeeded();
} );
```
    • Task
This is not Abstract-related, but something I came across while looking at the various store files. Stores handle “in-flight promise” dedupe in several ways: different state names (requests, languageCodePromises, pendingPromises, rendererPromises, testResultsPromises), different cleanup (direct delete vs setters), and different cache shapes (separate cache vs promise stored in the result slot). Unify by adding a small shared helper (e.g. createDedupeFetcher) that implements: check cache → check in-flight map → run fetch → write cache → clear in-flight on settle. Migrate key-based fetchers (languageCode, zhtml sanitize, renderer, testResults; optionally wikidata) to use it; keep batch (fetchZids) and array (parserPromises) as special cases. Set up naming conventions.

NB: The proposed approach is very specific and is open to any better ideas, obviously!

**Summary**
Unify how stores track in-flight promises and dedupe fetches so the pattern is consistent and maintainable.

**Current state**
Stores use several different patterns:
* **library.js**: `requests` (zid → batch promise) and, soon, `languageCodePromises` (code → promise); cleanup via setter or direct delete in `.finally`.
* **zhtml.js**: `pendingPromises` (Map, hash → promise); cleanup in `.then`/`.catch`.
* **ztype.js**: `rendererPromises` (cacheKey → promise) with setter `setRendererPromise`; `parserPromises` (array) for collective wait.
* **testResults.js**: `testResultsPromises` with setter `setTestResultsPromise`.
* **wikidata (items/properties/lexemes)**: No separate promise map; store the promise in the same slot as the result (`items[id]` = data or Promise).

Inconsistencies: naming (requests vs *Promises), cleanup (when/where to delete), and API (direct assign vs setter).

**Proposed approach**
1. Add a shared utility (e.g. in store utils): **createDedupeFetcher(ctx, promisesStateKey, getCached, fetch, setCache, opts)** that:
   * Returns the cached value if present.
   * Returns the existing in-flight promise if present.
   * Otherwise runs fetch, stores the promise in ctx[promisesStateKey][key], and on settle writes the cache and clears the in-flight entry (e.g. in `.finally`).
2. Migrate key-based fetchers to use it: fetchLanguageCode (library), sanitiseHtml (zhtml), runRenderer (ztype), fetchTestResults (testResults); optionally wikidata.
3. Standardise in-flight state naming to `*Promises` where applicable.
4. Leave batch (fetchZids) and array (parserPromises) as special cases unless we add a batch/array helper later.

**Acceptance criteria**
* [ ] Shared dedupe-fetcher utility exists and is documented.
* [ ] At least library (languageCode), zhtml, ztype (renderer), and testResults use it (or document why not).
* [ ] In-flight maps use a consistent naming convention.
* [ ] No behavioural regressions; existing tests pass.
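A sketch of what the proposed helper could look like (the name comes from this task; the exact signature — a closure here, vs. the ctx/promisesStateKey state proposed above — is an open design choice):
```
// Sketch of createDedupeFetcher: cache hit -> resolve immediately;
// in-flight hit -> share the existing promise; otherwise fetch, write the
// cache on success, and always clear the in-flight slot on settle.
function createDedupeFetcher( { getCached, fetch, setCache } ) {
	const inFlight = {}; // key -> promise (would live in store state as `*Promises`)
	return function ( key ) {
		const cached = getCached( key );
		if ( cached !== undefined ) {
			return Promise.resolve( cached );
		}
		if ( inFlight[ key ] !== undefined ) {
			return inFlight[ key ];
		}
		inFlight[ key ] = fetch( key )
			.then( ( value ) => {
				setCache( key, value );
				return value;
			} )
			.finally( () => {
				delete inFlight[ key ];
			} );
		return inFlight[ key ];
	};
}

// Hypothetical usage for fetchLanguageCode (store/api names made up):
// const fetchLanguageCode = createDedupeFetcher( {
//     getCached: ( code ) => store.languages[ code ],
//     fetch: ( code ) => api.resolveLanguageZid( code ),
//     setCache: ( code, zid ) => store.setLanguageCode( code, zid )
// } );
```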
    • Task
    In order to simplify network policy management, evaluate migration of the mediawiki chart to external-services. Potential issues: - mw-script/mw-cron namespace-stable network policies (not per-release) - mcrouter daemonset access - otelcol access
    • Task
[Wikibase secondary CI](https://github.com/wikimedia/mediawiki-extensions-Wikibase/actions/workflows/secondaryCI.yml) and [EntitySchema daily CI](https://github.com/wikimedia/mediawiki-extensions-EntitySchema/actions/workflows/dailyCI.yml) are broken:
```counterexample
Run mirromutth/mysql-action@v1.1
/usr/bin/docker run --name eea360b67c323d44bd8c6f363818dd845e_5216e3 --label 1466ee --workdir /github/workspace --rm -e "COMPOSER_HOME" -e "INPUT_MYSQL_VERSION" -e "INPUT_MYSQL_DATABASE" -e "INPUT_MYSQL_ROOT_PASSWORD" -e "INPUT_HOST_PORT" -e "INPUT_CONTAINER_PORT" -e "INPUT_CHARACTER_SET_SERVER" -e "INPUT_COLLATION_SERVER" -e "INPUT_MYSQL_USER" -e "INPUT_MYSQL_PASSWORD" -e "HOME" -e "GITHUB_JOB" -e "GITHUB_REF" -e "GITHUB_SHA" -e "GITHUB_REPOSITORY" -e "GITHUB_REPOSITORY_OWNER" -e "GITHUB_REPOSITORY_OWNER_ID" -e "GITHUB_RUN_ID" -e "GITHUB_RUN_NUMBER" -e "GITHUB_RETENTION_DAYS" -e "GITHUB_RUN_ATTEMPT" -e "GITHUB_ACTOR_ID" -e "GITHUB_ACTOR" -e "GITHUB_WORKFLOW" -e "GITHUB_HEAD_REF" -e "GITHUB_BASE_REF" -e "GITHUB_EVENT_NAME" -e "GITHUB_SERVER_URL" -e "GITHUB_API_URL" -e "GITHUB_GRAPHQL_URL" -e "GITHUB_REF_NAME" -e "GITHUB_REF_PROTECTED" -e "GITHUB_REF_TYPE" -e "GITHUB_WORKFLOW_REF" -e "GITHUB_WORKFLOW_SHA" -e "GITHUB_REPOSITORY_ID" -e "GITHUB_TRIGGERING_ACTOR" -e "GITHUB_WORKSPACE" -e "GITHUB_ACTION" -e "GITHUB_EVENT_PATH" -e "GITHUB_ACTION_REPOSITORY" -e "GITHUB_ACTION_REF" -e "GITHUB_PATH" -e "GITHUB_ENV" -e "GITHUB_STEP_SUMMARY" -e "GITHUB_STATE" -e "GITHUB_OUTPUT" -e "RUNNER_OS" -e "RUNNER_ARCH" -e "RUNNER_NAME" -e "RUNNER_ENVIRONMENT" -e "RUNNER_TOOL_CACHE" -e "RUNNER_TEMP" -e "RUNNER_WORKSPACE" -e "ACTIONS_RUNTIME_URL" -e "ACTIONS_RUNTIME_TOKEN" -e "ACTIONS_CACHE_URL" -e "ACTIONS_RESULTS_URL" -e "ACTIONS_ORCHESTRATION_ID" -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp":"/github/runner_temp" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/mediawiki-extensions-Wikibase/mediawiki-extensions-Wikibase":"/github/workspace" 1466ee:a360b67c323d44bd8c6f363818dd845e
Root password not empty, use root superuser
Use specified database
docker: Error response from daemon: client version 1.40 is too old. Minimum supported API version is 1.44, please upgrade your client to a newer version.
See 'docker run --help'.
```
    • Task
In T409452, we introduced a Vue 'mixin' to share functionality between the edit statement and add statement forms. As part of the changes in T406878, this usage was then removed from the edit statement form, making the mixin unnecessary and also introducing duplicated code between the mixin and the edit statement form. Fix the add statement and edit statement forms either to use a common mixin, or to use their own implementations of `submitForm`.

**Acceptance Criteria**
- [] The mixin is either used by both the add statement and edit statement forms, or it is used by neither
- [] Code duplication is kept to a minimum
    • Task
Context: https://meta.wikimedia.org/wiki/Community_Wishlist/W443

This is mostly done, but there are some loose ends - particularly checking whether yt-dlp can filter by CC BY, because those are the videos we want on Commons. There are some pending discussions on the talk page.
    • Task
**Steps to replicate the issue** (include links if applicable):
* I followed the guidelines to make an external link clickable, but it is not working yet:
* - I mapped the formatter URL with Wikidata
* - created a FactGrid ID with the correct link + $1
* - added the identifier for the item and waited 24h.

**What happens?**:
* It is not clickable yet

**What should have happened instead?**:
* It should be clickable and redirect to the external URL for my item

**Software version** (on `Special:Version` page; skip for WMF-hosted wikis like Wikipedia):
* I'm using wikibase.cloud

**Other information** (browser name/version, screenshots, etc.):
    • Task
Please add Schleswig-Holstein to the monuments database

Useful information to include:
* project - wikipedia
* lang - de
* code - de-sh
* headerTemplate - Denkmalliste Schleswig-Holstein Tabellenkopf
* rowTemplate - Denkmalliste Schleswig-Holstein Tabellenzeile
* commonsTemplate - Baudenkmal Schleswig-Holstein
* commonsTrackerCategory - Cultural heritage monuments in Schleswig-Holstein with known IDs
* commonsCategoryBase - Category:Cultural heritage monuments in Schleswig-Holstein
* autoGeocode - To automagically geocode the images at Commons (be careful!)
* unusedImagesPage - Wikipedia:WikiProjekt Denkmalpflege/Deutschland/Schleswig-Holstein/Ungenutzte Bilder
* imagesWithoutIdPage - Wikipedia:WikiProjekt Denkmalpflege/Deutschland/Schleswig-Holstein/Bilder ohne Nummer
* registrantUrlBase - https://efi2.schleswig-holstein.de/dish/dish_suche/html/denkmalErgebnisSeite.html?objektidEingabe=obj$1/1/-/
    • Task
    In T417041 we started presenting error messages generated by the Wikibase backend directly to the user. These strings are in some cases grammatically incorrect or stylistically strange for native speakers (e.g. 'An URL scheme "fish" is not supported', 'This URL misses a scheme like "https://": test.com'). We also do not have a clear overview of which strings might possibly be presented to users and under what circumstances. Review all user-facing Wikibase-generated error messages and check that they are presented in appropriate contexts in Wikibase, updating any messages that are clearly incorrect.
    • Task
**Feature summary** (what you would like to be able to do and where):
To be able to get a named section's content by its title, without calculating its section index. Thus, besides
https://ru.wikipedia.org/w/api.php?action=parse&&format=json&formatversion=2&prop=wikitext&page=Википедия:Форум/Технический&section=14
to be able to call something like
...action=parse&&format=json&formatversion=2&prop=wikitext&page=Википедия:Форум/Технический&sectionTitle=Получение текста раздела статьи через API

**Use case(s)**:
It is very common to need to address or fetch a named page section, yet it is highly awkward over the API. One first needs to fetch the TOC data, calculate the section index client-side, and only then call `&section=<calculated index>`. I asked at the [[https://ru.wikipedia.org/wiki/Википедия:Форум/Технический#Получение_текста_раздела_статьи_через_API | ru-wiki technical forum]], but it seems there is no much better solution.

**Benefits** (why should this be implemented?):
Because it is the way the API is actually needed in 99% of cases, without doubled API calls and client-side hassles. The actual behavior of that hypothetical `&sectionTitle=...` could be just like that of `{{#lsth:Page name|Section name}}`.
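To illustrate the awkwardness, here is roughly what a client has to do today (a sketch using `action=parse&prop=sections`, which returns the section list with titles and indexes):
```
// Sketch of today's two-call workflow (run in an async context).
const api = 'https://ru.wikipedia.org/w/api.php';
const page = 'Википедия:Форум/Технический';
const wanted = 'Получение текста раздела статьи через API';
const base = `${ api }?action=parse&format=json&formatversion=2&page=${ encodeURIComponent( page ) }`;

// Call 1: fetch the section list and find the index by title client-side.
const { parse } = await fetch( `${ base }&prop=sections` ).then( ( r ) => r.json() );
const section = parse.sections.find( ( s ) => s.line === wanted );
if ( !section ) {
	throw new Error( 'Section not found: ' + wanted );
}

// Call 2: fetch the wikitext of that section by its numeric index.
const result = await fetch( `${ base }&prop=wikitext&section=${ section.index }` )
	.then( ( r ) => r.json() );
console.log( result.parse.wikitext );

// The requested &sectionTitle=... parameter would collapse this into one call.
```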
    • Task
I'm using Ubuntu 24.04, Chrome, and MediaWiki 1.46.0-alpha (83df57f) 08:35, 13. Feb. 2026 with MathJax rendering. The code `\varinjlim\nolimits_{n}` and the code `\displaystyle \varinjlim\nolimits_{n}` will not be rendered correctly with MathJax. In this case, the symbol n should be to the right of the arrow. {F72010003}
    • Task
Currently, the clarity-tool search functionality requires users to type in full queries before results are displayed. This can be improved by implementing an auto-suggestion feature that dynamically provides article titles or relevant keywords as the user types into the search bar. The goal of this task is to enhance usability and efficiency by helping users discover articles faster and with fewer keystrokes. Auto-suggestions should be context-aware, drawing from the existing database of articles and presenting the most relevant matches based on partial input.

Requirements:
- Implement a frontend component that displays a dropdown list of suggestions while typing.
- Suggestions should update in real-time as the user continues to type.
- Ensure that suggestions are ranked by relevance (e.g., prefix matches first, then substring matches).
- Integrate with the backend search logic to fetch article titles or keywords efficiently.
- Handle edge cases such as empty input, no matches found, or very short queries.
- Ensure accessibility (keyboard navigation, screen reader compatibility).
- Optimize for performance so that suggestions appear instantly without noticeable lag.

Acceptance criteria:
- When a user types at least 2–3 characters in the search bar, a list of relevant article suggestions appears.
- Selecting a suggestion either auto-completes the search field or directly navigates to the article (depending on design choice).
- The feature works consistently across supported browsers and devices.
- The implementation is tested with a representative dataset of articles to confirm accuracy and responsiveness.

This improvement will make the search experience more intuitive and user-friendly, reducing friction for users who may not know exact article titles. It also aligns with common UX patterns found in modern web applications, increasing the overall polish and professionalism of clarity-tool.
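A minimal sketch of the frontend half (all names are placeholders; ranking and data access would live behind the `fetchSuggestions` callback): debounce keystrokes, skip very short queries, and discard out-of-order responses so a slow early request cannot overwrite a newer one.
```
// Sketch only: createSuggester wires an input field to a suggestion callback.
function createSuggester( fetchSuggestions, minLength = 2, delayMs = 150 ) {
	let timer = null;
	let requestId = 0;
	return function ( query, onResults ) {
		clearTimeout( timer );
		if ( query.length < minLength ) {
			onResults( [] ); // covers empty and very short input
			return;
		}
		timer = setTimeout( async () => {
			const id = ++requestId;
			const results = await fetchSuggestions( query );
			if ( id === requestId ) { // ignore stale responses
				onResults( results );
			}
		}, delayMs );
	};
}

// Hypothetical usage with an in-memory title list:
const titles = [ 'Alan Turing', 'Algorithm', 'Albatross' ];
const suggest = createSuggester( async ( q ) =>
	titles.filter( ( t ) => t.toLowerCase().startsWith( q.toLowerCase() ) )
);
suggest( 'al', ( results ) => console.log( results ) );
```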
    • Task
I'm using Ubuntu 24.04, Chrome, and MediaWiki 1.46.0-alpha (83df57f) 08:35, 13. Feb. 2026 with MathJax rendering. The code
```
\begin{align}
\vartheta_1(z,q) &= \sum_{n=-\infty}^{\infty} \\
\vartheta_2(z,q) &= \sum\limits_{n=-\infty}^{\infty}
\end{align}
```
will be rendered with MathJax as in the picture. But in my opinion the sigma signs must be the same size, and the bigger size is the correct one, I think. {F72009257}
    • Task
==== Motivation
We've heard feedback that the gray bar is distracting to users and sometimes prompts users to quit their session because it is "annoying". [Recent instrumentation](https://superset.wikimedia.org/superset/dashboard/724/) (screenshot below) shows only a few hundred clicks on the banner links (as opposed to ~20k accounts created per day), so we know that temporary account users are not finding this to be a major nudge to create registered accounts.

In light of all this, we are considering removing the gray bar entirely and instead displaying temporary account names just as we display registered user names. This task is to explore design options for indicating to the user that they have a temporary account without the gray bar.

{F72006497}

==== Designs
TBD

==== Relevant links
* {T330510}
    • Task
Our [[ https://gitlab.com/wmde/technical-wishes/apache_hive_ex/ | Hive ]] adapter generates most of its API from a Thrift definition by code-generating Erlang and Elixir. Currently, the Erlang [[ https://www.erlang.org/doc/system/ref_man_records.html | record ]] structures are present in the Elixir file, but the representation is mostly generated explicitly:
```
{:TSessionHandle, {:THandleIdentifier, :undefined, :undefined}}
```
However, this would be better written as:
```
TSessionHandle.record()
```
where the record type should be defined as:
```
@type t :: record(:record, sessionHandle: TSessionHandle.t(), configuration: term())
```
It should be possible to modify the code generation to emit these better types.
    • Task
    **Steps to replicate the issue** (include links if applicable): * Have the right to view edit filters * Go to https://en.wikipedia.org/wiki/Special:AbuseFilter/1094 * Check the browser console **What happens?**: Chromium: `Uncaught NetworkError: Failed to execute 'send' on 'XMLHttpRequest': Failed to load 'https://en.wikipedia.org/w/api.php?action=abusefilterchecksyntax&format=json&filter=[redacted]'.` Firefox: The XHR request is unresolved. **What should have happened instead?**: No errors. The request goes through. **Other information**: The `abusefilterchecksyntax` API call was added in T187686 ([[https://gerrit.wikimedia.org/g/mediawiki/extensions/AbuseFilter/+/master/modules/worker-abusefilter.js|worker-abusefilter.js]]).